Chronos: Learning the Language of Time Series
Abdul Fatir Ansari1∗, Lorenzo Stella1∗, Caner Turkmen1, Xiyuan Zhang2†, Pedro Mercado1, Huibin Shen1, Oleksandr Shchur1, Syama Sundar Rangapuram1, Sebastian Pineda Arango3‡, Shubham Kapoor1, Jasper Zschiegner, Danielle C. Maddix1, Michael W. Mahoney4, Kari Torkkola4, Andrew Gordon Wilson1, Michael Bohlke-Schneider1, Yuyang Wang1
{ansarnd,stellalo}@amazon.com
1 Amazon Web Services, 2 UC San Diego, 3 University of Freiburg, 4 Amazon Supply Chain Optimization Technologies
∗ Equal contribution.
† Work done during an internship at Amazon Web Services.
Abstract
We introduce Chronos, a simple yet effective framework for pretrained probabilistic time
series models. Chronos tokenizes time series values using scaling and quantization into
a fixed vocabulary and trains existing transformer-based language model architectures on
these tokenized time series via the cross-entropy loss. We pretrained Chronos models
based on the T5 family (ranging from 20M to 710M parameters) on a large collection of
publicly available datasets, complemented by a synthetic dataset that we generated via
Gaussian processes to improve generalization. In a comprehensive benchmark consisting of
42 datasets, and comprising both classical local models and deep learning methods, we show
that Chronos models: (a) significantly outperform other methods on datasets that were
part of the training corpus; and (b) have comparable and occasionally superior zero-shot
performance on new datasets, relative to methods that were trained specifically on them.
Our results demonstrate that Chronos models can leverage time series data from diverse
domains to improve zero-shot accuracy on unseen forecasting tasks, positioning pretrained
models as a viable tool to greatly simplify forecasting pipelines.
1 Introduction
Time series forecasting is an essential component of decision-making across various domains, including retail,
energy, finance, healthcare, climate science, among others. Traditionally, forecasting has been dominated by
statistical models such as ARIMA and ETS. These have served as reliable tools, at least until the recent shift
towards deep learning techniques (Hyndman & Athanasopoulos, 2018; Benidis et al., 2022). This shift can be
attributed to the availability of large and diverse time series data sources, and the emergence of operational
forecasting problems (Kolassa & Januschowski, 2019) that play to the strengths of deep forecasters, i.e., the
ability to extract patterns out of a large collection of time series. Despite their impressive performance, deep
forecasters still operate in the standard regime of training and prediction on the same dataset. While there
have been works dedicated to transfer learning (Ye & Dai, 2018) and domain adaptation (Jin et al., 2022)
for forecasting, the field has yet to converge on a unified, general-purpose forecasting model, a goal that
remains a beacon for time series researchers.
The emergence of large language models (LLMs) with zero-shot learning capabilities has ignited interest
in developing “foundation models” for time series. In the context of LLMs, this interest has been pursued
through two main avenues: directly prompting pretrained LLMs in natural language (Gruver et al., 2023;
Xue & Salim, 2023) and fine-tuning LLMs for time series tasks (Zhou et al., 2023a; Jin et al., 2024). However,
these methods face significant limitations, notably the need for prompt engineering or fine-tuning for each
new task, or reliance on large-scale models (GPT-3 (Brown et al., 2020), Llama 2 (Touvron et al., 2023), etc.)
that demand substantial computational resources and time for inference. Recent concurrent work (Dooley
et al., 2023; Das et al., 2023; Rasul et al., 2023; Woo et al., 2024) also explores pretraining transformer-based
models with sophisticated time-series-specific designs on a large corpus of real and/or synthetic time series data.
In this work, we take a step back and ask: what are the fundamental differences between a language model
that predicts the next token, and a time series forecasting model that predicts the next values? Despite the
apparent distinction — tokens from a finite dictionary versus values from an unbounded, usually continuous
domain — both endeavors fundamentally aim to model the sequential structure of the data to predict future
patterns. Shouldn’t good language models “just work” on time series? This naive question prompts us to
challenge the necessity of time-series-specific modifications, and answering it led us to develop Chronos,
a language modeling framework minimally adapted for time series forecasting. Chronos tokenizes time
series into discrete bins through simple scaling and quantization of real values. In this way, we can train
off-the-shelf language models on this “language of time series,” with no changes to the model architecture
(see Figure 1 for a high-level depiction of Chronos). Remarkably, this straightforward approach proves
to be effective and efficient, underscoring the potential for language model architectures to address a broad
range of time series problems with minimal modifications.
Figure 1: High-level depiction of Chronos. (Left) The input time series is scaled and quantized to obtain a sequence
of tokens. (Center) The tokens are fed into a language model which may either be an encoder-decoder or a decoder-
only model. The model is trained using the cross-entropy loss. (Right) During inference, we autoregressively sample
tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a
predictive distribution.
For the development of a useful general-purpose time series forecasting model, the scarcity of publicly
available time series datasets, both in quantity and quality, is arguably more critical than the modeling
framework. In addition to the comprehensive collection of public datasets we used to train Chronos, a
central aspect of our approach is the integration of data augmentation strategies, including TSMix and
KernelSynth. TSMix randomly samples a set of base time series from different training datasets, and
generates new time series based on a convex combination of them; KernelSynth uses Gaussian processes
to generate synthetic time series by randomly composing kernel functions. These techniques address the
inherent limitations of small training datasets in time series forecasting, enhancing model robustness and
generalization.
Our comprehensive evaluation across 42 datasets establishes Chronos as a benchmark for both in-domain
and zero-shot forecasting, surpassing both traditional models and task-specific deep learning approaches.
Notably, Chronos achieves impressive zero-shot forecasting performance out of the box, without necessi-
tating task-specific adjustments. Its accuracy, coupled with its relatively modest model size, positions it as
a preferable alternative to larger, more computationally demanding models for zero-shot forecasting appli-
cations. By its very nature as a language model operating over a fixed vocabulary, Chronos can seamlessly
integrate with future advancements in LLMs, making it an ideal candidate for further development as a
generalist time series model.
The rest of the paper is organized as follows. Section 2 introduces the background on time series forecasting
and language models, and discusses related work. In Section 3, we describe Chronos, our proposed language
modeling framework for time series. Section 4 discusses our data augmentation technique and synthetic time
series generation process. In Section 5, we present our main results and a rigorous analysis of different design
choices. We discuss future directions in Section 6, and conclude the paper in Section 7. Additional material
is presented in the appendices.
2 Background and Related Work
Time series forecasting concerns using historical data from a quantity of interest (typically real-valued) to predict its future values. Formally, given a uniformly-spaced time series x1:C = [x1, . . . , xC], we are interested in predicting the joint distribution of the next H steps, p(xC+1:C+H | x1:C). In this work, we focus on univariate forecasting, where the observations are scalars, i.e., xi ∈ R for all i.
Time series forecasting can be addressed with a variety of different methods which can be broadly categorized
into classical forecasting methods and deep learning methods. Classical forecasting methods such as ETS,
ARIMA (Hyndman et al., 2008), and Theta (Assimakopoulos & Nikolopoulos, 2000) fit a separate model to each
time series independently (hence referred to as local models). In contrast, deep learning forecasting models
learn across time series in a given dataset (and are called global models). These methods leverage advances
in deep learning, such as RNNs which are used by DeepState (Rangapuram et al., 2018), DeepAR (Salinas
et al., 2020), TimeGrad (Rasul et al., 2021), and transformers which are used by TFT (Lim et al., 2021)
and PatchTST (Nie et al., 2023). Apart from the choice of architecture, these approaches differ in the
way they model the target, with some modeling the density function while others directly predicting a set
of quantiles (Wen et al., 2017; Gasthaus et al., 2019). Nevertheless, not all models produce probabilistic
forecasts: notably, models such as Informer (Zhou et al., 2021) and DLinear (Zeng et al., 2023) only produce
point forecasts.
Large language models (LLMs) have demonstrated impressive performance on various natural language
processing tasks (Brown et al., 2020; Chung et al., 2022; Touvron et al., 2023). Given a sequence of input to-
kens, w1:k = [w1 , . . . , wk ], language models aim to predict the next token, wk+1 , by modeling the conditional
distribution, p(wk+1 |w1:k ). The tokens belong to a vocabulary, V, and may be characters, subwords (Sennrich
et al., 2015), or words, depending on the tokenization scheme used.
Most modern LLMs (Brown et al., 2020; Chung et al., 2022; Touvron et al., 2023) are based on the transformer
architecture (Vaswani et al., 2017). The original transformer architecture is an encoder-decoder model
designed for machine translation. The encoder maps an input sentence of some language to a continuous
representation, and the decoder generates the translation token-by-token using the input representation
and previously decoded tokens. Many popular language models, such as BART (Lewis et al., 2019) and
T5 (Raffel et al., 2020; Chung et al., 2022), belong to this family. Another popular architecture for LLMs is
decoder-only, used in GPT-3 (Brown et al., 2020) and Llama 2 (Touvron et al., 2023), where the model only
attends to tokens up to the current token. LLMs are typically trained on a very large corpus of text with
their number of parameters ranging from millions (Raffel et al., 2020) to hundreds of billions (Chowdhery
et al., 2023). We refer the reader to Zhao et al. (2023) for a recent survey on this area of research.
LLM-based forecasters. Inspired by the success of pretrained LLMs, recent work has shown that LLMs
are general pattern recognizers (Mirchandani et al., 2023) and several methods adapting LLMs to the time
series domain have been developed. One line of work treats numerical time series data as raw text and directly
uses pretrained LLMs with minimal or no fine-tuning to forecast unseen time series. PromptCast (Xue &
Salim, 2023) leverages pretrained LLMs for forecasting by transforming the time series data into text-based
input and output pairs and reformulating the forecasting problem as a question answering task. However,
PromptCast requires dataset-specific templates for converting numerical data to text prompts. Perhaps the
most straightforward LLM-based forecasting model is LLMTime (Gruver et al., 2023), which shows clear
evidence for zero-shot forecasting ability of pretrained LLMs on a variety of benchmark time series datasets.
LLMTime proposes a new tokenization scheme that encodes real-valued data as a string of digits after fixing
the numerical precision and scaling the data appropriately. Once encoded as strings, forecasts are obtained
in a zero-shot setting from pretrained LLMs such as GPT-3 (Brown et al., 2020) and Llama 2 (Touvron
et al., 2023). Nevertheless, the use of such compute-hungry models hampers the scalability and practical
utility of LLMTime.
Zhou et al. (2023a) propose a unified one-fits-all model (GPT4TS) for different time series analysis tasks
by using a pretrained GPT-2 model (Radford et al., 2019) as a backbone and fine-tuning only the positional
embeddings and the parameters of the layer normalization for each individual task. Instead of using tokenized
input, they directly feed the model with patch embeddings, similar to PatchTST (Nie et al., 2023). Recent
concurrent work, Time-LLM (Jin et al., 2024), repurposes LLMs for time series forecasting by aligning
embeddings of time series patches with text prototypes, and prompting the (frozen) LLM with these aligned
embeddings and a natural language prefix describing the task. Unlike Chronos, both GPT4TS and Time-
LLM require in-domain training or fine-tuning, i.e., they are fine-tuned and tested on each dataset separately.
Furthermore, the aforementioned methods are based on prompting or fine-tuning pretrained LLMs. In
contrast, Chronos trains language models from scratch on a large collection of time series, tokenized via
scaling and quantization.
Zero-shot forecasting. Zero-shot forecasting is the ability of models to generate forecasts for time series
from unseen datasets. Some early work (Orozco & Roberts, 2020; Oreshkin et al., 2021; Jin et al., 2022)
in zero-shot forecasting considers training on a single time series dataset and testing on a different dataset.
ForecastPFN (Dooley et al., 2023) tackles the problem of zero-shot forecasting by training a transformer-
based model purely on synthetic data generated according to predefined trends and seasonalities (daily, monthly, and yearly). The trained transformer model is then used to forecast real-world time series in a zero-shot setting.
In this work, we also propose a method to generate synthetic time series data from Gaussian processes
(Section 4.2); however, we use the synthetic data in combination with real data to train Chronos models,
which improves the overall zero-shot performance. Furthermore, Chronos models are probabilistic, whereas
ForecastPFN can only generate point forecasts.
Recent concurrent works (Rasul et al., 2023; Goswami et al., 2024; Das et al., 2023; Woo et al., 2024)
also develop zero-shot forecasting models by pretraining transformer-based architectures on a large corpus
of time series data. These works operate on the real values of the time series and include time-series-
specific designs such as time features, lags, patching, and real-valued distribution heads, among others. In
contrast, Chronos follows a minimalist approach by tokenizing time series values into a fixed vocabulary
and training existing language model architectures on these tokens without any time-series-specific design or
features. That is, Chronos uses a categorical distribution to model the observations, performing regression
via classification.
Other time series tasks. Similar to Zhou et al. (2023a), recent works have studied general purpose
models applicable across time series tasks including imputation, forecasting, classification and anomaly
detection. Wu et al. (2023) develop a task-generic backbone based on the Inception model (Szegedy et al.,
2015). In order to use the CNN-based Inception model, the one-dimensional time series is transformed into a two-dimensional image-like representation by segmenting the time series based on its periodicity
and stacking the segments. SimMTM (Dong et al., 2023) is a masked pretraining framework for time series
which learns general time series representations that are then used for forecasting and classification via
fine-tuning. Although we focus on univariate time series forecasting in this work, based on its excellent
performance on unseen time series datasets, we hypothesize that Chronos learns general representations
that can potentially be deployed for tasks beyond forecasting.
3 Chronos: A Language Modeling Framework for Time Series
In this section, we introduce Chronos, a framework adapting existing language model architectures and
training procedures to probabilistic time series forecasting. While both language and time series are sequen-
tial in nature, they differ in terms of their representation — natural language consists of words from a finite
vocabulary, while time series are real-valued. This distinction necessitates specific modifications to existing
language modeling frameworks, especially concerning tokenization, to make them applicable to time series
data. Nevertheless, since existing transformer models have excelled on language tasks, our design philosophy
involves making minimal changes to the model architectures and training procedure.
Consider a time series x1:C+H = [x1 , . . . , xC+H ], where the first C time steps constitute the historical
context, and the remaining H represent the forecast horizon. Language models operate on tokens from a
finite vocabulary, so using them for time series data requires mapping the observations xi ∈ R to a finite set
of tokens. To this end, we first scale and then quantize observations into a fixed number of bins.
Scaling. The scale of time series can differ significantly even within a single dataset. This poses optimiza-
tion challenges for deep learning models. Therefore, individual time series are normalized to facilitate better
optimization. In the case of Chronos, the goal of normalization is to map the time series values into a
suitable range for quantization. A common normalization technique involves applying an affine transforma-
tion to the time series, i.e., x̃i = (xi − m)/s. Several popular normalization schemes, such as mean scaling,
standard scaling and min-max scaling, can be obtained by appropriately choosing m and s. We opt for mean
scaling, a method that has proven effective in deep learning models commonly used for practical time series
applications (Salinas et al., 2020), but other approaches are viable and only require minimal changes. Mean
scaling normalizes individual entries of the time series by the mean of the absolute values in the historical context. Specifically, this involves setting m = 0 and s = (1/C) Σ_{i=1}^{C} |x_i|.
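As a concrete illustration, the following is a minimal NumPy sketch of mean scaling; the epsilon guard against all-zero contexts is an assumption added for the example, not a detail from the paper.

import numpy as np

def mean_scale(context, eps=1e-10):
    # Mean scaling as described above: m = 0 and s is the mean absolute
    # value of the historical context (eps guards against all-zero series).
    s = np.abs(context).mean() + eps
    return context / s, s

# Example: values in the thousands are mapped to a range around 1.
x = np.array([2400.0, 2142.0, 2282.0, 2245.0, 2310.0])
x_scaled, s = mean_scale(x)  # s ≈ 2275.8, x_scaled ≈ [1.05, 0.94, 1.00, 0.99, 1.02]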
Quantization. The scaled time series x̃1:C+H = [x̃1 , . . . , x̃C , . . . , x̃C+H ], is still real-valued and cannot
be processed directly by language models. To convert these real values into discrete tokens, we employ
quantization. Formally, we select B bin centers c1 < . . . < cB on the real line, and B − 1 edges bi separating
them, ci < bi < ci+1 , for i ∈ {1, . . . , B − 1}. The quantization function q : R → {1, 2, . . . , B}, and
dequantization d : {1, 2, . . . , B} → R, are then defined as
q(x) = \begin{cases} 1 & \text{if } -\infty \le x < b_1, \\ 2 & \text{if } b_1 \le x < b_2, \\ \;\vdots & \\ B & \text{if } b_{B-1} \le x < \infty, \end{cases} \qquad \text{and} \qquad d(j) = c_j, \qquad (1)
respectively. The positioning of bin centers and edges can either be data-dependent or uniform (Rabanser
et al., 2020). Quantile binning, a type of data-dependent binning, exploits the cumulative distribution func-
tion (CDF) of the training datapoints to construct bins such that approximately equal number of datapoints
are assigned to each bin. In contrast, uniform binning selects bin centers uniformly within some interval
[l, r]. Since the distribution of values for unseen downstream datasets can differ significantly from the train-
ing distribution, we opt for uniform binning in our experiments, but other quantization techniques can be
used. We refer the reader to Rabanser et al. (2020) for a detailed discussion on quantization schemes for
time series. A potential limitation of this approach is that the prediction range is restricted to [c1, cB],
making it theoretically infeasible to model time series with a strong trend. We explore this further in a
practical setting in Section 5.7.
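The quantization step can likewise be written in a few lines. The sketch below uses uniform binning; the number of bins B and the interval [l, r] are illustrative assumptions for the example and not the configuration of the released Chronos models.

import numpy as np

B, l, r = 4096, -15.0, 15.0
centers = np.linspace(l, r, B)             # bin centers c_1 < ... < c_B
edges = (centers[:-1] + centers[1:]) / 2   # bin edges b_1 < ... < b_{B-1}

def quantize(x_scaled):
    # Map scaled real values to token ids in {1, ..., B}, as in Eq. (1).
    return np.digitize(x_scaled, edges) + 1

def dequantize(tokens):
    # Map token ids back to their bin centers, d(j) = c_j.
    return centers[np.asarray(tokens) - 1]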
Apart from the time series tokens {1, 2, . . . , B}, we include two special tokens, commonly used in language
models, into the time series vocabulary, Vts : PAD and EOS. The PAD token is used to pad time series of different
lengths to a fixed length for batch construction and to replace missing values. The EOS token is appended
to the quantized and padded time series to denote the end of the sequence. While the use of an EOS token
is not strictly necessary in the case of time series, it makes training and inference using popular language
modeling libraries convenient. Sequences of tokens from Vts can readily be processed by language models (both encoder-decoder and decoder-only models) and used to train them as usual. A common approach in time series
modeling is to incorporate time and frequency information, through features such as day-of-week, week-
of-year, and so on. Perhaps counter-intuitively, in Chronos, we ignore time and frequency information,
treating the “time series” simply as a sequence.
We primarily focus on the variants of the encoder-decoder T5 model (Raffel et al., 2020). Additionally, we
conduct an experiment with the GPT-2 (Radford et al., 2019) model to demonstrate that our approach
can be straightforwardly extended to decoder-only models. No modifications are required to the language
model architecture, except adjusting the vocabulary size to |Vts |, which depends on the number of bins used
for quantization and may be different from the vocabulary size of the original language model. Concretely,
adjusting the vocabulary size entails truncating (or extending) the input and output embedding layers of
the language model.
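As a sketch of what this adjustment looks like in practice, the snippet below resizes a T5 checkpoint's embeddings with the Hugging Face transformers library; the checkpoint name and the vocabulary size (4096 bins plus PAD and EOS) are assumptions made for the example.

from transformers import T5ForConditionalGeneration

vocab_size = 4096 + 2  # |V_ts|: bin tokens plus PAD and EOS (illustrative)
model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-small")
# Truncate (or extend) the input and output embedding layers to |V_ts|.
model.resize_token_embeddings(vocab_size)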
As typical in language models, we use the categorical distribution over the elements of Vts as the output
distribution, p(zC+h+1 |z1:C+h ) where z1:C+h is the tokenized time series. Chronos is trained to minimize
the cross entropy between the distribution of the quantized ground truth label and the predicted distribution.
Formally, the loss function for a single tokenized time series (also accounting for EOS tokens) is given by,
\ell(\theta) = -\sum_{h=1}^{H+1} \sum_{i=1}^{|V_{\text{ts}}|} \mathbf{1}_{(z_{C+h+1} = i)} \log p_\theta(z_{C+h+1} = i \mid z_{1:C+h}), \qquad (2)
where pθ (zC+h+1 = i|z1:C+h ) denotes the categorical distribution predicted by the model parameterized by
θ. In practice, the loss is averaged over a batch of time series during training.
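A minimal PyTorch sketch of this objective is shown below, assuming logits of shape (batch, H + 1, |Vts|) and ground-truth token ids of shape (batch, H + 1); it illustrates Eq. (2) rather than reproducing the authors' training code, and padding positions would additionally be masked in practice.

import torch
import torch.nn.functional as F

def chronos_loss(logits, target_tokens):
    # Categorical cross-entropy over the time series vocabulary, Eq. (2).
    #   logits:        (batch, H + 1, |V_ts|) unnormalized scores per step
    #   target_tokens: (batch, H + 1) quantized ground-truth token ids
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        target_tokens.reshape(-1),
    )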
Note that the categorical cross entropy loss (Eq. 2) is not a distance-aware objective function, i.e., it does not
explicitly recognize that bin i is closer to bin i + 1 than to i + 2. Instead, the model is expected to associate
nearby bins together, based on the distribution of bin indices in the training dataset. In other words,
Chronos performs regression via classification (Torgo & Gama, 1997). This is unlike typical probabilistic
time series forecasting models, which either use parametric continuous distributions such as Gaussian and
Student’s-t (Salinas et al., 2020) or perform quantile regression (Wen et al., 2017; Lim et al., 2021).
Opting for a categorical output distribution offers two key advantages. Firstly, it requires no modification
to the language model architecture or training objective, enabling the use of popular language modeling
libraries and the utilities they provide out of the box (Wolf et al., 2020). Secondly, it imposes no restrictions
on the structure of the output distribution, allowing the model to learn arbitrary distributions including
multimodal ones. This flexibility proves especially valuable for a pretrained model, as time series datasets
from diverse domains may follow distinct output distribution patterns.
3.3 Forecasting
Chronos models are probabilistic by design and multiple realizations of the future can be obtained by
autoregressively sampling from the predicted distribution, pθ (zC+h+1 |z1:C+h ), for h ∈ {1, 2, . . . , H}. These
sample paths come in the form of token IDs that need to be mapped back to real values and then unscaled
to obtain the actual forecast. The dequantization function d from Eq. (1) maps the predicted tokens to real
values: these are then unscaled by applying the inverse scaling transformation, which in the case of mean
scaling involves multiplying the values by the scale s.
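The full inference loop is short; the sketch below assumes a hypothetical model interface, next_token_probs, that returns a probability vector over the B bin tokens (special tokens omitted for simplicity), and reuses the dequantize helper from the quantization sketch above.

import numpy as np

def sample_forecasts(model, context_tokens, horizon, num_samples=20, scale=1.0):
    # Autoregressive sampling followed by dequantization and unscaling (Section 3.3).
    paths = []
    for _ in range(num_samples):
        tokens = list(context_tokens)
        for _ in range(horizon):
            probs = model.next_token_probs(tokens)        # p(z_{C+h+1} | z_{1:C+h})
            tokens.append(1 + np.random.choice(len(probs), p=probs))  # bin ids are 1-indexed
        sampled = np.array(tokens[len(context_tokens):])
        paths.append(dequantize(sampled) * scale)         # unscale by s (mean scaling)
    return np.stack(paths)  # (num_samples, horizon): samples from the predictive distribution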
4 Data Augmentation
The quality and quantity of public time series data pales in comparison to the natural language processing
(NLP) domain, which benefits from ample high-quality text datasets such as WikiText-103 (Merity et al.,
2016), C4 (Raffel et al., 2020), and The Pile (Gao et al., 2020). This poses challenges for training models
intended for zero-shot forecasting, which rely on large-scale time series data with diverse patterns. To address
this issue, we propose enhancing the diversity of training data by generating mixup augmentations from real
datasets and supplementing training with synthetic data.
4.1 TSMix: Time Series Mixup
1
sha1_base64="Cm4OIrFr02Jt0L/KmjA/QmLUbz4=">AAAyOXicjVvLbtzIFdXkOVFeMwmQTTZEZGNmAllQ28YkmwCjt2S1pNb74fYYbPZtNi2+xKqmJDd6fiPb5DvyJVlmF2SbH8itKhbvLYrSRIAl1jnFy2LVuXVPs+lBHkdCLi//85Mf/PBHP/7JTz/92fzPf/HLX/36s89/cyaySRHAaZDFWXEx8AXEUQqnMpIxXOQF+MkghvPB9Zriz0soRJSlJ/I+h3eJH6bRKAp8idC7foxdh/77zl+Wl75+/9nC8tKy/vEeHnSqg4W56qf3/vPO7/rDLJgkkMog9oV421nO5bupX8goiGE2358IyP3g2g/hLR6mfgLi3VSPeuY9R2TojbIC/6XS0yg/Y+onQtwnA+yZ+HIsmpwC27i3Ezn687tplOYTCWlgLjSaxJ7MPDUF3jAqIJDxPR74QRHhWL1g7Bd+IHGi5ufnn6sfb3/j3NtbOdn21jc2d/Z3TnYO9o89Tc23jWQR/6r7EIuDZIYxvD2/uPYEXgjnWXjZyAv83ByrWy5gBEURpaEa1TAqI2G7jaJwUgDeUQq3QZYkfjqc9hGMYSRn02kfEu/LLh5/NZs96BPgQkBhe63pVlu/IgrHdbAj1WjrJbPc9jnJ8rYeg0zKLLGdVnXrQb/qvn3bzX+sx8D2GDzWI7A9gsd6DG2PoeqBy7CNdxerO/R8D/urVYcRJsvQw7lJ3Bh4rMDZ2847jDIYeQsdFQSjbOpFMauGmoJFL85uoXgRYOotzfcxpJ5WGC10pmYBv+tja6oDtJ2Ow42kHy95mygGITFj1NoLtWLIm4ibNuJmM6Km5W1mr7nwsrqq8GwnD++oary0Z9xM/CGdsvBq4fWD0xbrc+zRKx7qtb6dY6PqJ6cDlW8GX6WAMx8tAeyEmLOP7dnHLWcf2bN0Rt9mdZYt1RNjri70zNQ5+MjUNAOOC4BmSBZv4dXDiDRrLParh7H91ANcBHVyy5TBjbnnjZul7760ob96OkiUermPlByDiASLk2OgL1Wk/zPQJM+h8NRoTJCNejCmh7MCK17h39LqNYK9ePHCL7No6E2E2uCikZdnQkRYkkzoPPYxAav4jy6srzblHPOxZaYUY06v+jyuj0dusgq0Vgda+95AeM9pCHonN32r6dZwPSIUnKVtqBcvHhUbjs6PwwyL0DhpuU/kzOjqTk/eKAv14E5XbKiVllA2bez18CbqWE9vKSfOSSvfe9KDSUX1yurOmfoUaoarjp5aFHN+U729+vyee7690/oCOGp1/OiAK8FBFCuxxuoAywJ2UEdVvFGcZYWm9ZHh9WHVAalBMu00a5YsMBFm077yD4EfT9ebHUo/joa8w3tzXCRTQ80ehAQh20/QzKy+I8iFqpS5iOIsrarcEYbIEq/0i8jHbLX6BulPVeQ7mWZFglGf9RF6NrPTWTRon5iBywyICVwmIGboMkNiwGWAmJHLjIgJXSYkZuwyY2Iil4mI+eAyH4i5dplrYmKXiWdaxkXiRQIzFs368F5tdmYFF70PEyG9YZZ+IT3ll1GO92rncRbGS6rYqRs7patmLpMRk7tMTsyNy9wQU7hMQYxwGUGMdBlJzMRlJsSULlMSc+syt8TcucwdMfcuc0/MR5f5ODNm0SYA1ves3t7LKkmmJpUGI5Y29bix/uossT10m/GM4/CAYJYbZUAwS4xySDDLihIIZilRjghm+VCGBLNkKMcEs0woJwSzNCg/EMxyoLwmmCVAGRMcMzghOGEwm2g+wxnBTMxlTjBTcnlDMJNxWRDMNFwKggVfVIJl+5xw6ZYEM92WtwQz0ZZ3BDPFlvcEM7mWHwm2Wt2IQX3u1p8ZixbdghFd674MRnmtOzMY+bXuzWA02Lo7gxFi6/4MRo2tOzQYSbbu0WB02bpLI/foPg1Goa07NRiZtu7VYLTa3K0tl7hcwrlHd2Iw0m3di8Hot3U3BiPi1v0YjJJbd2Qwcm7dk8FounVXBiPs1n0ZjLpbd2YwEm/dm8HovHV3BiP21v0ZjOIf36ExF4ooqB1KskL5sUJpk6wSvMrgNYLXGLxO8DqDNwjeYPAmwZsM3iJ4i8HbBG8zeIfgHQa/IfgNg3cJ3mVwl+Aug/cI3mPwPsH7DD4g+IDBPYJ7DD4k+JDBRwQfMfiY4GMGnxB8wuBTgk8ZfEbwGYPPCT5n8AXBFwy+JPiSwVcEXz2+vbqiA6M6ptEVpl8tPcatcm7N5dY4t+5y65zbcLkNzm263Cbntlxui3PbLrfNuR2X2+HcG5d7w7ldl9vlXNflupzbc7k9zu273D7nDlzugHM9l+tx7tDlDjl35HJHnDt2uWPOnbjcCedOXe6Uc2cud8a5c5c759yFy11w7tLlLjl35XJW9mfcQpQfQX+OwM+uy/W5ZZbC1H6etVgyMVA/oaJRe2KFu364rGCGDAxCPkS7EETIfWjvgQh5jrIaCTkN7TMQIX+h3QUi5Cq0p0CEvIR2EoiQg9D+ARHyDdo1IEJuQXsFRGI2DwYhZ6B9ASIpmz+DkAvQHgARqv268iNCFV/Xe0Sozusqj4hgE24QqulltSxsUUqDUP3W1RsRqtq6ZiNCtVpXakSoQuv6jEibG3VtaOnH+Vitt/5bK7AcVOLQurAgfdSiJxMVZb6pQsYcEJElECpc/yVYS1LJ0QIYEBH8TZCIwkSdqv8SbIVbiba+kemUj3+qxGpbKNaAWijUIbupqRKobaFAR9RCcYbUQmGOqYXDZWNFQX6gForxms3NVImwvvOpEqBt4WSyWUTxZWxKpkp0toWiu6EWCq5gMzVVQqsnaKpEZls40WyaUWAltVBct9RCYd1RC0V1Ty0U1MdZ9c0Z1tk7g+saizqj2qorKyJUUXU9RYTqqK6iiFD11LUTEaqZumIiQpVS10lEqD7q6ogIVUVdExGhWqgrISJUAXX9Q4Tqnq56iFC107UOEapxusIhQpVN1zVEqJ7paoYIVTFdwxCh2qUrFyJUsXS9QoTqlK5SiFB10rUJEapJuiIhQpVI1yFEqP7o6oMIVR1dcxChWqMrDSJXbAWpLgx4WUh642oj7uMRmz2b+orpVulf31yVw4o7NnmsVXQCqVBfKK9DEPsFoKjGK2oHwisasydGkXpUCmmQDaM0xGD+JFaIGNXHyWwq1FPeY5CPBRhk8fD7wgzuZpiEzSe1qdDfNJq6WcXTT6mrW5PGX6aCqV+uWoz0L9csRhkg1y1GOSA3LEZZIDctRnkgtyxGmSC3LUa5IHcsRtkg31iM8kHuWowyQnYtRjkh9yxGWSH3LUZ5IQ8sRpkhexaj3JCHFqPskEcWo/yQxxajDJEnFqMckacWoyyRZxajPJHnFqNMkRcWo1yRlxajbJFXFjOODIW8Vfj52LCh/ZwbOB83wlUGky7CNQaTNMJ1BpM6wg0Gk0DCTQaTRsItBpNMwm0Gk1LCHQaTWMI3DCa9hLsMJsmEXQaTasI9BpNwwn0Gk3bCAwaTfMIeg0lB4SGDSUThEYNJR+Exg0lK4QmDSU3hKYNJUOEZ
g0lT4TmDSVbhBYNJWeElg0lc4RWDrePHra2yaqJ+ijJg4hKrhJK2xBqhJC2xTqhW1nNvXX+TMRHg+Z4A6eGlYxh6G4veAAJf4XIcCe82m8RDhLAFntDfe6CXnBSeegMoizGQemsG7nL0lvrLXPvF/CZdkdQptgglcYptQkmbYodQkqZ4QygpU+wSSsIUXUJJl2KPUJKl2CeUVCkOCCVRih6hpElxSChJUhwRSooUx4SSIMUJoaRHcUooyVGcEUpqFOeEkhjFBaGkRXFJKElRXBFaP3JJ0feB/gjhm4ctlQkEcgBd1/wre7hCLZTqKrVQomvUQmmuUws3uw1qoYg2qYXi2aIWimabWiiWHWqhSN5QC8WxSy0URZdaKIY9aqEI9qmFi39ALVz0HrVwsQ+phYt8RC1c3GNq4aKeUAsX85RauIhn1MLFO6cWLtoFtXCxLqmFi3TFrlc5rcplqSUDvmTSOC7cUlT+6pf6MIkNuujdRnKcTaSHdse7xZKWQ+EaIiBH5Lih6vKy1oDu+MAIgrZL0PBLoA0TNBwTaMsEDc8E2jRBwzWBtk3Q8E2gjRM0nBNo6wQN7wTaPEHDPYG2T9DwT6ANFDQcFGgLBQ0PBdpEQcNFgbZR0PBRoI0UNJwUaCsFDS8F2kxBw02BtlPQ8FOgDRU0HBVoSwUNTwXaVEHDVYG2VdDwVaCNFTScFWhrBQ1vBdpcQcNdgbZX0PBXoA0WMIeFnxSw5MhiAt4kHUIR36uXloa+9L0QUiiw2qh2JFDpg4kqPa5sc9V1Ns3fT/tFMtUNXfhUVEjyqIiw5Dnn128gDu51udOvgaiLYH1sxLZviIx9iZ/U3Us4PXu8Z2/WNpgkG0L81I3oDvWdmNaD61Sdek91ymUUD6Hq2deNevT1GbhNyCwY+0K9f+tPZKY/QUHhjLDxGmxu+tRjrE55OIAhOP1Ms6VfgQRuOrafaaIWAv0Mze0c+3nsBzCr36jpVsDMe+5Vx+70uudvzOqKt9EcSFewl3a6TfZoxmt7Y9dMcjbHDTIuZva5m0sUEM7qJ2lNKpB0j6oVjSIomqFFNpKJf0c9LdDsh8Ui0y8xmYdsD6Pk8UTd/Uf1JMBld7sz/gbTbvfBAp75BY1ANZrxJf7xC1z7ImM9jx8swFpWEq0aSqDnWTwq/EQ9kBrfZgUaVOHfC+9Z99uXz9TbO/rd9Ulq3mUVOa6/0G+PPetDHLM+9oHoc28VCyCmfKp+3WO+Q6LeYlMu2ARlvdWLqNkk1DVTm+JIwqIOLzJvmIEKdxtdRzkMI3+p8SJzViSxeng/m3a/XZ61kFkKiuu0cfJWn/eyjcsVk7cwWgvdb/tROpL3zdQxr6jiKvd8lSzHgHut8ENQr6+mWWXoJdwteWvjTKjpyZQBDMbeOn72TeEL4Q2y7Hpp3nmcc5Cr3Tkr/ogaL0I9APzbX1RHT3VU+6TpiEftIbVasZv+/UiPExTUiXrDLwbZ9weYZ3F2OyjAv55//9lCp/n/Jh4enL1c6ny99Prw9cI3q9X/qfh07vdzf5j7cq4z96e5b+a253pzp3PB3M3cX+f+Nvf3zj86/+r8u/Mf0/UHn1Tn/HbO+en893+i+mpP</latexit>
=
0.6
mona et al., 2021; Zhou et al., 2023b) have extended Mixup to
<latexit
sha1_base64="XJJUhO828DzhtMUqHH6k4j8Dqe0=">AAAyOXicjVvLbtzIFdVMXhPlNZMA2WRDRDZmJpAFtW0k2QQYvSWrJbXeD7fHYLNvs2nxJVY1JbnR8xvZJt+RL8kyuyDb/EBuVbF4b1GUJgIssc4pXharzq17mk0P8jgScnn5n598+oMf/ujHP/nsp/M/+/kvfvmrz7/49ZnIJkUAp0EWZ8XFwBcQRymcykjGcJEX4CeDGM4H12uKPy+hEFGWnsj7HN4lfphGoyjwJULv+jF2HfrvO39ZXnr1/vOF5aVl/eM9POhUBwtz1U/v/Red3/aHWTBJIJVB7AvxtrOcy3dTv5BREMNsvj8RkPvBtR/CWzxM/QTEu6ke9cx7jsjQG2UF/kulp1F+xtRPhLhPBtgz8eVYNDkFtnFvJ3L053fTKM0nEtLAXGg0iT2ZeWoKvGFUQCDjezzwgyLCsXrB2C/8QOJEzc/PP1c/3v7Gube3crLtrW9s7uzvnOwc7B97mppvG8ki/lX3IRYHyQxjeHt+ce0JvBDOs/CykRf4uTlWt1zACIoiSkM1qmFURsJ2G0XhpAC8oxRugyxJ/HQ47SMYw0jOptM+JN5XXTz+ejZ70CfAhYDC9lrTrbZ+RRSO62BHqtHWS2a57XOS5W09BpmUWWI7rerWg37Vffu2m/9Yj4HtMXisR2B7BI/1GNoeQ9UDl2Eb7y5Wd+j5HvZXqw4jTJahh3OTuDHwWIGzt513GGUw8hY6KghG2dSLYlYNNQWLXpzdQvEiwNRbmu9jSD2tMFroTM0CftfH1lQHaDsdhxtJP17yNlEMQmLGqLUXasWQNxE3bcTNZkRNy9vMXnPhZXVV4dlOHt5R1Xhpz7iZ+EM6ZeHVwusHpy3W59ijVzzUa307x0bVT04HKt8MvkoBZz5aAtgJMWcf27OPW84+smfpjL7N6ixbqifGXF3omalz8JGpaQYcFwDNkCzewquHEWnWWOxXD2P7qQe4COrklimDG3PPGzdL331lQ3/9dJAo9XIfKTkGEQkWJ8dAX6lI/2egSZ5D4anRmCAb9WBMD2cFVrzCv6XVawR78eKFX2bR0JsItcFFIy/PhIiwJJnQeexjAlbxH11YX23KOeZjy0wpxpxe9XlcH4/cZBVorQ609r2B8J7TEPRObvpW063hekQoOEvbUC9ePCo2HJ0fhxkWoXHScp/ImdHVnZ68URbqwZ2u2FArLaFs2tjr4U3UsZ7eUk6ck1a+96QHk4rqldWdM/Up1AxXHT21KOb8pnp79fk993x7p/UFcNTq+NEBV4KDKFZijdUBlgXsoI6qeKM4ywpN6yPD68OqA1KDZNpp1ixZYCLMpn3lHwI/nq43O5R+HA15h/fmuEimhpo9CAlCtp+gmVl9R5ALVSlzEcVZWlW5IwyRJV7pF5GP2Wr1DdKfqsh3Ms2KBKM+6yP0bGans2jQPjEDlxkQE7hMQMzQZYbEgMsAMSOXGRETukxIzNhlxsRELhMR88FlPhBz7TLXxMQuE8+0jIvEiwRmLJr14b3a7MwKLnofJkJ6wyz9UnrKL6Mc79XO4yyMl1SxUzd2SlfNXCYjJneZnJgbl7khpnCZghjhMoIY6TKSmInLTIgpXaYk5tZlbom5c5k7Yu5d5p6Yjy7zcWbMok0ArO9Zvb2XVZJMTSoNRixt6nFj/dVZYnvoNuMZx+EBwSw3yoBglhjlkGCWFSUQzFKiHBHM8qEMCWbJUI4JZplQTghmaVB+IJjlQHlNMEuAMiY4ZnBCcMJgNtF8hjOCmZjLnGCm5PKGYCbjsiCYabgUBAu+qATL9jnh0i0JZrotbwlmoi3vCGaKLe8JZnItPxJstboRg/rcrT8zFi26BSO61n0ZjPJad2Yw8mvdm8FosHV3BiPE1v0ZjBpbd2gwkmzdo8HosnWXRu7RfRqMQlt3ajAybd2rwWi1uVtbLnG5hHOP7sRgpNu6F4PRb+tuDEbErfsxGCW37shg5Ny6J4PRdOuuDEbYrfsyGHW37sxgJN66N4PReevuDEbsrfszGMU/vkNjLhRRUDuUZIXyY4XSJlkleJXBawSvMXid4HUGbxC8weBNgjcZvEXwFoO3Cd5m8A7BOwx+Q/AbBu8SvMvgLsFdBu8RvMfgfYL3GXxA8AGDewT3GHxI8CGDjwg+YvAxwccMPiH4hMGnBJ8y+IzgMwafE3zO4AuCLxh8SfAlg68Ivnp8e3VFB0Z1TKMrTL9aeoxb5dyay61xbt3l1jm34XIbnNt0uU3ObbncFue2XW6bczsut8O5Ny73hnO7LrfLua7LdTm353J7nNt3uX3OHbjcAed6Ltfj3KHLHXLuyOWOOHfscsecO3G5E86dutwp585c7oxz5y53zrkLl7vg3KXLXXLuyuWs7M+4hSg/gv4cgZ9dl+tzyyyFqf08a7FkYqB+QkWj9sQKd/1wWcEMGRiEfIh2IYiQ+9DeAxHyHGU1EnIa2mcgQv5CuwtEyFVoT4EIeQntJBAhB6H9AyLkG7RrQITcgvYKiMRsHgxCzkD7AkRSNn8GIRegPQAiVPt15UeEKr6u94hQnddVHhHBJtwgVNPLalnYopQGofqtqzciVLV1zUaEarWu1IhQhdb1GZE2N+ra0NKP87Fab/23VmA5qMShdWFB+qhFTyYqynxThYw5ICJLIFS4/kuwlqSSowUwICL4myARhYk6Vf8l2Aq3Em19I9MpH/9UidW2UKwBtVCoQ3ZTUyVQ20KBjqiF4gyphcIcUwuHy8aKgvxALRTjNZubqRJhfedTJUDbwslks4jiy9iUTJXobAtFd0MtFFzBZmqqhFZP0FSJzLZwotk0o8BKaqG4bqmFwrqjForqnlooqI+z6pszrLN3Btc1FnVGtVVXVkSooup6igjVUV1FEaHqqWsnIlQzdcVEhCqlrpOIUH3U1RERqoq6JiJCtVBXQkSoAur6hwjVPV31EKFqp2sdIlTjdIVDhCqbrmuIUD3T1QwRqmK6hiFCtUtXLkSoYul6hQjVKV2lEKHqpGsTIlSTdEVChCqRrkOIUP3R1QcRqjq65iBCtUZXGkSu2ApSXRjwspD0xtVG3McjNns29RXTrdK/vrkqhxV3bPJYq+gEUqG+UF6HIPYLQFGNV9QOhFc0Zk+MIvWoFNIgG0ZpiMH8SawQMaqPk9lUqKe8xyAfCzDI4uH3hRnczTAJm09qU6G/aTR1s4qnn1JXtyaNv0wFU79ctRjpX65ZjDJArluMckBuWIyyQG5ajPJAblmMMkFuW4xyQe5YjLJBvrEY5YPctRhlhOxajHJC7lmMskLuW4zyQh5YjDJD9ixGuSEPLUbZIY8sRvkhjy1GGSJPLEY5Ik8tRlkizyxGeSLPLUaZIi8sRrkiLy1G2SKvLGYcGQp5q/DzsWFD+zk3cD5uhKsMJl2EawwmaYTrDCZ1hBsMJoGEmwwmjYRbDCaZhNsMJqWEOwwmsYRvGEx6CXcZTJIJuwwm1YR7DCbhhPsMJu2EBwwm+YQ9BpOCwkMGk4jCIwaTjsJjBpOUwhMGk5rCUwaToMIz
BpOmwnMGk6zCCwaTssJLBpO4wisGW8ePW1tl1UT9FGXAxCVWCSVtiTVCSVpinVCtrOfeuv4mYyLA8z0B0sNLxzD0Nha9AQS+wuU4Et5tNomHCGELPKG/90AvOSk89QZQFmMg9dYM3OXoLfWXufaL+U26IqlTbBFK4hTbhJI2xQ6hJE3xhlBSptgllIQpuoSSLsUeoSRLsU8oqVIcEEqiFD1CSZPikFCSpDgilBQpjgklQYoTQkmP4pRQkqM4I5TUKM4JJTGKC0JJi+KSUJKiuCK0fuSSou8D/RHCNw9bKhMI5AC6rvlX9nCFWijVVWqhRNeohdJcpxZudhvUQhFtUgvFs0UtFM02tVAsO9RCkbyhFopjl1ooii61UAx71EIR7FMLF/+AWrjoPWrhYh9SCxf5iFq4uMfUwkU9oRYu5im1cBHPqIWLd04tXLQLauFiXVILF+mKXa9yWpXLUksGfMmkcVy4paj81S/1YRIbdNG7jeQ4m0gP7Y53iyUth8I1RECOyHFD1eVlrQHd8YERBG2XoOGXQBsmaDgm0JYJGp4JtGmChmsCbZug4ZtAGydoOCfQ1gka3gm0eYKGewJtn6Dhn0AbKGg4KNAWChoeCrSJgoaLAm2joOGjQBspaDgp0FYKGl4KtJmChpsCbaeg4adAGypoOCrQlgoangq0qYKGqwJtq6Dhq0AbK2g4K9DWChreCrS5goa7Am2voOGvQBssYA4LPylgyZHFBLxJOoQivlcvLQ196XshpFBgtVHtSKDSBxNVelzZ5qrrbJq/n/aLZKobuvCpqJDkURFhyXPOr99AHNzrcqdfA1EXwfrYiG3fEBn7Ej+pu5dwevZ4z96sbTBJNoT4qRvRHeo7Ma0H16k69Z7qlMsoHkLVs68b9ejrM3CbkFkw9oV6/9afyEx/goLCGWHjNdjc9KnHWJ3ycABDcPqZZku/AgncdGw/00QtBPoZmts59vPYD2BWv1HTrYCZ99yrjt3pdc/fmNUVb6M5kK5gL+10m+zRjNf2xq6Z5GyOG2RczOxzN5coIJzVT9KaVCDpHlUrGkVQNEOLbCQT/456WqDZD4tFpl9iMg/ZHkbJ44m6+4/qSYDL7nZn/A2m3e6DBTzzCxqBajTjS/zjF7j2RcZ6Hj9YgLWsJFo1lEDPs3hU+Il6IDW+zQo0qMK/F96z7rcvn6m3d/S765PUvMsqclx/od8ee9aHOGZ97APR594qFkBM+VT9usd8h0S9xaZcsAnKeqsXUbNJqGumNsWRhEUdXmTeMAMV7ja6jnIYRv5S40XmrEhi9fB+Nu1+uzxrIbMUFNdp4+StPu9lG5crJm9htBa63/ajdCTvm6ljXlHFVe75KlmOAfda4YegXl9Ns8rQS7hb8tbGmVDTkykDGIy9dfzsm8KXwhtk2fXSvPM45yBXu3NW/AE1XoR6APi3v6iOnuqo9knTEY/aQ2q1Yjf9+5EeJyioE/WGXwyy7w8wz+LsdlCAfz3//vOFTvP/TTw8OHu51Pnj0uvD1wvfrFb/p+Kzud/N/X7uq7nO3J/mvpnbnuvNnc4Fczdzf53729zfO//o/Kvz785/TNdPP6nO+c2c89P57/8ADZ9qTA==</latexit>
the time series domain. Building upon these works, we propose k=2
<latexit sha1_base64="UygptRWTAicL1U11cx2bffRF1jk=">AAAyL3icjVvbbtzIEdVubhvltpsAeckLEdnY3UAWLNtI8hJgdZeskTTSjK6W1+Bwaji0eDO7hyN5MPsHeU2+I18T5CXIa/4i1d1sVjVFaSPAEuuc7mJfTnXVcOhBHkdCPn/+r08+/cEPf/Tjn3z208Wf/fwXv/zV51/8+kxkkyKA0yCLs+Ji4AuIoxROZSRjuMgL8JNBDOeDmw3Fn5dQiChL+/Iuh7eJH6bRKAp8iVDv5i8v3n2+9Hzluf7x7l+sVhdLC9VP990Xq7+9HmbBJIFUBrEvxJvV57l8O/MLGQUxzBevJwJyP7jxQ3iDl6mfgHg702Ode08RGXqjrMB/qfQ0ynvM/ESIu2SALRNfjkWTU2Ab92YiR39+O4vSfCIhDcyNRpPYk5mnJu4NowICGd/hhR8UEY7VC8Z+4QcSl2dxcfGp+vEOt869g7X+rre5tb13uNffOzrseZpabBvJMv5V8xDLg2SOPrwDv7jxBN4IV1d42cgL/NxcqykXMIKiiNJQjWoYlZGwzUZROCkAZ5TCNMiSxE+Hs2sEYxjJ+Wx2DYn3VQevv57P77UJcCOgsK02tNXWrojCce3sRBltrWSW2zb9LG9rMcikzBLbaF1b99pV8/ZtM/+hFgPbYvBQi8C2CB5qMbQthqoFbsMuzi5WM/R8D9urXYcRhsjQw7VJXB94rcD5m9W36GUw8pZWlRP0sq03xewaagqWvTibQvEswIBbWbxGl3pZYbS0OjMb+N01WjPtoK07DjeSfrzibaMYhMSIUXsv1I4hbzxuW4/bTY+altPM3nPpRXVX4dlGHs6oMl7YHh8m/pC6LL1cenWv23Ldx1695K5e6en0jKofXQ5Uvhl8FQLOerQ4sAtievds715L7xPbS0f0NKujbKVeGHN3oVemjsEHlqbpcFwANF0yf0sv73ukVWO+X9737ace4Caozi1LBh/MnLc+rHz3lXX99eNOotTLfaTkGEQkmJ8cHX2lPP2fjiZ5DoWnRmOcbNWDMS2cHVjzCn9Ku9dw9uzZM7/MoqE3EeqAi0ZengkRYSIyrvPYxwCs/D+4sb46lHOMx5aVUozpXrV5WB8PTLJytFE72vheRzjnNAR9kpu21XJruB4RCs7S1tWzZw+KDUfnx2GGSWictMwTOTO6utGjE2Wu7s10zbpaa3Flw8beDydR+3r8SOk7nda+t9O9RUX1ymrmTH0KNcNVV49tiunfVG+37t91+9uZ1jfAUavrBwdcCQ6iWIk1VheYFrCBuqr8jeIsKzStrwyvL6sGSA2S2WozZ8kCA2E+u1b1Q+DHs81mg9KPoyFv8M5cF8nMUPN7LkHI9g6amdczglyoTJmLKM7SKsudoIss8Uq/iHyMVqtvkP5Meb6VaVYk6PXJNUJP5nY5iwbtEzNwmQExgcsExAxdZkgMuAwQM3KZETGhy4TEjF1mTEzkMhEx713mPTE3LnNDTOwy8VzLuEi8SGDEYok+vFOHndnBZe/9REhvmKVfSk/VyyjHO3XyOBvjJZXv1PWd0l0zl8mIyV0mJ+aDy3wgpnCZghjhMoIY6TKSmInLTIgpXaYkZuoyU2JuXeaWmDuXuSPmo8t8nJti0QYA5vesPt7LKkhmJpQGIxY29bgx/+oosS20zXjGcXhAMIuNMiCYBUY5JJhFRQkEs5AoRwSzeChDglkwlGOCWSSUE4JZGJTvCWYxUN4QzAKgjAmOGZwQnDCYLTRf4YxgJuYyJ5gpufxAMJNxWRDMNFwKggXfVIJl+5pw6ZYEM92WU4KZaMtbgpliyzuCmVzLjwRbrW7FoD5368+MRYtuwYiu9VwGo7zWkxmM/FrPZjAabD2dwQix9XwGo8bWExqMJFvPaDC6bD2lkXvwnAaj0NaTGoxMW89qMFptntaWS1wu4dyDJzEY6baexWD023oagxFx63kMRsmtJzIYObeeyWA03XoqgxF267kMRt2tJzMYibeezWB03no6gxF76/kMRvEPn9AYC0UU1BVKskbxsUZhk6wTvM7gDYI3GLxJ8CaDtwjeYvA2wdsM3iF4h8G7BO8yeI/gPQa/Jvg1g/cJ3mdwh+AOgw8IPmDwIcGHDD4i+IjBXYK7DD4m+JjBJwSfMLhHcI/BfYL7DD4l+JTBZwSfMfic4HMGXxB8weBLgi8ZfEXw1cPHqys6MKpjGl1j+tXSY9w65zZcboNzmy63ybktl9vi3LbLbXNux+V2OLfrcruc23O5Pc69drnXnNt3uX3OdVyuw7kDlzvg3KHLHXLuyOWOONd1uS7njl3umHMnLnfCuZ7L9TjXd7k+505d7pRzZy53xrlzlzvn3IXLXXDu0uUuOXflclb2Z7yEKD+C/hyBn12f133LLIWZ/TxrsWRioOuEkkZdEyvcrYfLCmbIwCBUh+gqBBGqPnTtgQjVHGU1Eqo0dJ2BCNUXurpAhKoKXVMgQrWEriQQoQpC1w+IUN2gqwZEqFrQtQIiMVsHg1BloOsCRFK2fgahKkDXAIhQ7teZHxHK+DrfI0J5Xmd5RARbcINQTi+rbWGbUhqE8rfO3ohQ1tY5GxHK1TpTI0IZWudnRNqqUbcMLf04H6v91n9rBZaDShxaFxakj1r0ZKKiYj8ZDFUPc0FElkCocP2XYC1JJUcLoENE8DdBIgoT1VX/JdgKtxJtPZHZjI9/psRqLRRrQBYKdcgmNVMCtRYKdEQWijMkC4U5JguHy8aKgnxPForxhq3NTImwnvlMCdBauJhsFVF8GVuSmRKdtVB0H8hCwRVspWZKaPUCzZTIrIULzZYZBVaSheKakoXCuiULRXVHFgrq47z65gzz7K3BdY5FnVFu1ZkVEcqoOp8iQnlUZ1FEKHvq3IkI5UydMRGhTKnzJCKUH3V2RISyos6JiFAu1JkQEcqAOv8hQnlPZz1EKNvpXIcI5Tid4RChzKbzGiKUz3Q2Q4SymM5hiFDu0pkLEcpYOl8hQnlKZylEKDvp3IQI5SSdkRChTKTzECKUf3T2QYSyjs45iFCu0ZkGkSu2g5QXBjwtJN1xdRBf4xVbPRv6iulU4V9ProphxfVMHGsV9SEV6gvlTQhivwAU1XhNnUB4R1PsiVGkHpVCGmTDKA3RmT+JFSJG9XUynwn1lLcH8iEHgywefp+bwe0cg7D5pDYV+ptGkzcrf/opdTU1aerLVDD1y3WLkf7lhsUoAuSmxSgG5JbFKArktsUoDuSOxSgS5K7FKBbknsUoGuRri1E8yH2LUUTIjsUoJuSBxSgq5KHFKC7kkcUoMmTXYhQb8thiFB3yxGIUH7JnMYoQ2bcYxYg8tRhFiTyzGMWJPLcYRYq8sBjFiry0GEWLvLKYqchQyDuFn48NG9rPuYHzcSNcZzDpItxgMEkj3GQwqSPcYjAJJNxmMGkk3GEwySTcZTApJdxjMIklfM1g0ku4z2CSTNhhMKkmPGAwCSc8ZDBpJzxiMMkn7DKYFBQeM5hEFJ4wmHQU9hhMUgr7DCY1hacMJkGFZwwmTYX
nDCZZhRcMJmWFlwwmcYVXDLYVPx5tVakm6qcoAyYusU4oaUtsEErSEpuEamU99Tb1NxkTAZ7vCZAe3jqGobe17A0g8BUux5HwptkkHiKEFnhCf++BteSk8NQbQFmMjtRbM3CbY22pv8y1X8xv0x1JnWKHUBKn2CWUtCn2CCVpiteEkjLFPqEkTNEhlHQpDgglWYpDQkmV4ohQEqXoEkqaFMeEkiTFCaGkSNEjlAQp+oSSHsUpoSRHcUYoqVGcE0piFBeEkhbFJaEkRXFFaP3IJcW6D/RHCN88bKmKQKAKoOMW/6o8XCMLpbpOFkp0gyyU5iZZeNhtkYUi2iYLxbNDFopmlywUyx5ZKJLXZKE49slCUXTIQjEckIUiOCQLN/+ILNz0Llm42cdk4SafkIWb2yMLN7VPFm7mKVm4iWdk4eadk4WbdkEWbtYlWbhJV+x+VaVVVVlqy4BvmTQVFx4pKn71S30YxAZd9qaRHGcT6WG5400xpeVQuAURUEXkVEPV7WWtAd3wXiEIulyCRr0EumCCRsUEumSCRs0EumiCRtUEumyCRt0EunCCRuUEunSCRu0EuniCRvUEunyCRv0EuoCCRgUFuoSCRg0FuoiCRhUFuoyCRh0FupCCRiUFupSCRi0FupiCRjUFupyCRj0FuqCCRkUFuqSCRk0FuqiCRlUFuqyCRl0FurCCRmUFurSCRm0FuriCRnUFuryCRn0FusACVmHhJwVMObKYgDdJh1DEd+qlpaEvfS+EFArMNsqOBCp9MFGpx5VtrprOZ/m72XWRzLShE5/yCkkeFRGmPKd//Qbi4E6nO/0aiLoJ5seGb/uGyNiX+EndvYXTsstbdudtg0myIcSPTUQ3qGdirHv3qRp1H2uUyygeQtXyWhv16OseeEzILBj7Qr1/609kpj9BQeGMsPEabG7a1GOsutwfwBCcdsZsaVcggYeObWdM1EKgn6G5jWM/j/0A5vUbNZ0KmHtPveraXV63/9a8znhbzYF0BHtpp9NkT+Y8tzdOzSRna9wg42Jun7u5RAHhvH6S1qQCSXNUVjSKoGi6FtlIJv4ttbRAsx0mi0y/xGQest33kscTNfuP6kmAy+535vwNpv3OvQ088wsagTKa/iX+8Qvc+yJjLXv3NmAjK4lWhhLoeRaPCj9RD6TG06zAAlX4d8J70vn2xRP19o5+d32SmndZRY77L/TbY0+uIY5ZG/tA9Km3jgkQQz5Vv+4w3iFRb7GpKtg4Za3Vi6jZJNQ5UxfFkYRl7V5k3jAD5W4a3UQ5DCN/pfEic1YksXp4P591vn0+byGzFBS32sbJqe73oo3LFZO3MFoLnW+vo3Qk75qhY15RxV3u+ipYeoBnrfBDUK+vpllV0Eu4XfE2xplQy5OpAjAYe5v42TeFL4U3yLKblUXncc5Rrk7nrPgDarwI9QDw7/WyunqsoTonTUO8anep1YrN9O8HWvRRUH31hl8M8tofYJzF2XRQgH+z+O7zpdXm/5u4f3H2YmX1jyuvjl8tfbNe/Z+KzxZ+t/D7ha8WVhf+tPDNwu5Cd+F0IVgIF/668LeFv6/+Y/Wfq/9e/Y9p+uknVZ/fLDg/q//9H5VKZqc=</latexit>
1
=
0.3
= 0.4
<latexit sha1_base64="+AP2peD/OqB1KTx76Z5Ogw9GpbA=">AAAyOXicjVvLbtzIFdVMXhPlNZMA2WRDRDZmJpAFyTaSbAKM3pLVklrvh9tjsNm32bT4Equaktzo+Y1sk+/Il2SZXZBtfiC3qli8t9iUJgIssc4pXharzq17mk338zgScnn5n598+oMf/ujHP/nsp/M/+/kvfvmrz7/49bnIxkUAZ0EWZ8Vl3xcQRymcyUjGcJkX4Cf9GC76N+uKvyihEFGWnsqHHN4lfphGwyjwJULvejF2HfjvX/5leen1+88XlpeW9Y83e7BSHSzMVT/d91+s/LY3yIJxAqkMYl+ItyvLuXw38QsZBTFM53tjAbkf3PghvMXD1E9AvJvoUU+954gMvGFW4L9UehrlZ0z8RIiHpI89E1+ORJNTYBv3diyHf343idJ8LCENzIWG49iTmaemwBtEBQQyfsADPygiHKsXjPzCDyRO1Pz8/HP14x1sXnj7q6c73sbm1u7B7unu4cGJp6n5tpEs4l91H2Kxn0wxhrfvFzeewAvhPAsvG3qBn5tjdcsFDKEoojRUoxpEZSRst2EUjgvAO0rhLsiSxE8Hkx6CMQzldDLpQeJ91cHjr6fTmT4BLgQUtte6brX1K6JwVAc7Vo22XjLLbZ/TLG/r0c+kzBLbaU23ZvpV9+3bbv5jPfq2R/+xHoHtETzWY2B7DFQPXIYdvLtY3aHne9hfrToMMVkGHs5N4sbAYwVO3668wyj9obewooJglC29KGbVUFOw6MXZHRQvAky9pfkehtTTCsOFlYlZwO962JroAG2n43Aj6cdL3haKQUjMGLX2Qq0Y8ibilo241YyoaXmX2WsuvKyuKjzbycM7qhov7Rm3Y39Apyy8Wng9c9pifY49esVDvda3c2JU/eR0oPLN4KsUcOajJYCdEHP2iT37pOXsY3uWzui7rM6ypXpizNWFnpk6Bx+ZmmbAUQHQDMniLbyajUizxmK/mo3tpx7gIqiTW6YMbs09b94uffeVDf3100Gi1Mt9pOQIRCRYnBwDfaUi/Z+BxnkOhadGY4Js1oMxPZwVWPUK/45WrxHsxYsXfplFA28s1AYXDb08EyLCkmRC57GPCVjFf3RhfbUp55iPLTOlGHN61edxfTxyk1Wg9TrQ+vcGwntOQ9A7uelbTbeG6xGh4CxtQ7148ajYcHR+HGZYhEZJy30iZ0ZXd3ryRlmomTtdtaFWW0LZtLHXw5uoYz29pZw6J61+70kzk4rqldWdM/Up1AxXHT21KOb8pnq79fld93x7p/UFcNTq+NEBV4KDKFZijdUBlgXsoI6qeMM4ywpN6yPD68OqA1L9ZLLSrFmywESYTnrKPwR+PNlodij9OBrwDu/NcZFMDDWdCQlCtp+gmWl9R5ALVSlzEcVZWlW5YwyRJV7pF5GP2Wr1DdKfqMj3Ms2KBKM+6yH0bGqns2jQPjF9l+kTE7hMQMzAZQbEgMsAMUOXGRITukxIzMhlRsRELhMR88FlPhBz4zI3xMQuE0+1jIvEiwRmLJr1wYPa7MwKLnofxkJ6gyz9UnrKL6McH9TO4yyMl1SxUzd2SlfNXCYjJneZnJhbl7klpnCZghjhMoIY6TKSmLHLjIkpXaYk5s5l7oi5d5l7Yh5c5oGYjy7zcWrMok0ArO9Zvb2XVZJMTCr1hyxt6nFj/dVZYnvoNuMZx+E+wSw3yoBglhjlgGCWFSUQzFKiHBLM8qEMCWbJUI4IZplQjglmaVB+IJjlQHlDMEuAMiY4ZnBCcMJgNtF8hjOCmZjLnGCm5PKWYCbjsiCYabgUBAu+qATL9jnh0i0JZrot7whmoi3vCWaKLR8IZnItPxJstboZg/rcrT8zFi26BSO61n0ZjPJad2Yw8mvdm8FosHV3BiPE1v0ZjBpbd2gwkmzdo8HosnWXRu7RfRqMQlt3ajAybd2rwWi1uVtbLnG5hHOP7sRgpNu6F4PRb+tuDEbErfsxGCW37shg5Ny6J4PRdOuuDEbYrfsyGHW37sxgJN66N4PReevuDEbsrfszGMU/vkNjLhRRUDuUZJXyY5XSJlkjeI3B6wSvM3iD4A0GbxK8yeAtgrcYvE3wNoN3CN5h8C7Buwx+Q/AbBu8RvMfgDsEdBu8TvM/gA4IPGHxI8CGDuwR3GXxE8BGDjwk+ZvAJwScMPiX4lMFnBJ8x+JzgcwZfEHzB4EuCLxl8RfAVg68Jvn58e3VFB0Z1TKOrTL9aeoxb49y6y61zbsPlNji36XKbnNtyuS3ObbvcNud2XG6Hc7sut8u5Ny73hnN7LrfHuY7LdTi373L7nDtwuQPOHbrcIee6Ltfl3JHLHXHu2OWOOXficiecO3W5U86dudwZ585d7pxzFy53wblLl7vk3JXLXXHu2uWs7M+5hSg/gv4cgZ9dl+tzyyyFif08a7FkbKBeQkWj9sQKd/1wWcEM6RuEfIh2IYiQ+9DeAxHyHGU1EnIa2mcgQv5CuwtEyFVoT4EIeQntJBAhB6H9AyLkG7RrQITcgvYKiMRsHgxCzkD7AkRSNn8GIRegPQAiVPt15UeEKr6u94hQnddVHhHBJtwgVNPLalnYopQGofqtqzciVLV1zUaEarWu1IhQhdb1GZE2N+ra0NKP85Fab/23VmDZr8ShdWFB+qhFTyYqynxThYw5ICJLIFS4/kuwlqSSowUwICL4myARhYk6Vf8l2Aq3Em19I5MJH/9EidW2UKwBtVCoA3ZTEyVQ20KBDqmF4gyphcIcUQuHy8aKgvxALRTjDZubiRJhfecTJUDbwslks4jiy9iUTJTobAtFd0stFFzBZmqihFZP0ESJzLZwotk0o8BKaqG47qiFwrqnForqgVooqI/T6pszrLP3Btc1FnVGtVVXVkSooup6igjVUV1FEaHqqWsnIlQzdcVEhCqlrpOIUH3U1RERqoq6JiJCtVBXQkSoAur6hwjVPV31EKFqp2sdIlTjdIVDhCqbrmuIUD3T1QwRqmK6hiFCtUtXLkSoYul6hQjVKV2lEKHqpGsTIlSTdEVChCqRrkOIUP3R1QcRqjq65iBCtUZXGkSu2QpSXejzspB0R9VG3MMjNns29RXTqdK/vrkqhxV3YvJYq+gUUqG+UN6AIPYLQFGNVtUOhFc0Zk8MI/WoFNIgG0RpiMH8cawQMayPk+lEqKe8JyAfC9DP4sH3henfTzEJm09qU6G/aTR1s4qnn1JXtyaNv0wFU79csxjpX65bjDJAbliMckBuWoyyQG5ZjPJAbluMMkHuWIxyQe5ajLJBvrEY5YPcsxhlhOxYjHJC7luMskIeWIzyQh5ajDJDdi1GuSGPLEbZIY8tRvkhTyxGGSJPLUY5Is8sRlkizy1GeSIvLEaZIi8tRrkiryxG2SKvLWYcGQp5u/DzkWFD+zk3cD5uhGsMJl2E6wwmaYQbDCZ1hJsMJoGEWwwmjYTbDCaZhDsMJqWEuwwmsYRvGEx6CfcYTJIJOwwm1YT7DCbhhAcMJu2Ehwwm+YRdBpOCwiMGk4jCYwaTjsITBpOUwlMGk5r
CMwaToMJzBpOmwgsGk6zCSwaTssIrBpO4wmsGW8ePW1tl1UT9FKXPxCXWCCVtiXVCSVpig1CtrOfehv4mYyzA8z0B0sNLxzDwNhe9PgS+wuUoEt5dNo4HCGELPKG/90AvOS489QZQFmMg9dYM3OfoLfWXufaL+S26IqlTbBNK4hQ7hJI2xS6hJE3xhlBSptgjlIQpOoSSLsU+oSRLcUAoqVIcEkqiFF1CSZPiiFCSpDgmlBQpTgglQYpTQkmP4oxQkqM4J5TUKC4IJTGKS0JJi+KKUJKiuCa0fuSSou8D/RHCNw9bKhMI5AA6rvlX9nCVWijVNWqhRNephdLcoBZudpvUQhFtUQvFs00tFM0OtVAsu9RCkbyhFopjj1ooig61UAz71EIRHFALF/+QWrjoXWrhYh9RCxf5mFq4uCfUwkU9pRYu5hm1cBHPqYWLd0EtXLRLauFiXVELF+maXa9yWpXLUksGfMmkcVy4paj81S/1YRIbdNG7i+QoG0sP7Y53hyUth8I1RECOyHFD1eVlrQHdccYIgrZL0PBLoA0TNBwTaMsEDc8E2jRBwzWBtk3Q8E2gjRM0nBNo6wQN7wTaPEHDPYG2T9DwT6ANFDQcFGgLBQ0PBdpEQcNFgbZR0PBRoI0UNJwUaCsFDS8F2kxBw02BtlPQ8FOgDRU0HBVoSwUNTwXaVEHDVYG2VdDwVaCNFTScFWhrBQ1vBdpcQcNdgbZX0PBXoA0WMIeFnxSw5MhiDN44HUARP6iXlga+9L0QUiiw2qh2JFDp/bEqPa5sc9V1OsnfT3pFMtENXfhUVEjyqIiw5Dnn128g9h90udOvgaiLYH1sxLZviIx8iZ/U3Us4Pbu8Z3faNpgkG0D81I3oDvWdmNbMdapO3ac65TKKB1D17OlGPfr6DNwmZBaMfKHev/XHMtOfoKBwRth4DTY3feoxVqfMDmAATj/TbOlXIIGbju1nmqiFQD9DczvHfh77AUzrN2o6FTD1nnvVsTu97vmb07ribTYH0hHspZ1Okz2e8tre2DWTnM1xg4yLqX3u5hIFhNP6SVqTCiTdo2pFwwiKZmiRDWXi31NPCzT7YbHI9EtM5iHbbJQ8Hqu7/6ieBLjsXmfK32Da68ws4Llf0AhUoxlf4h+/wLUvMtbzZGYB1rOSaNVQAr3I4mHhJ+qB1OguK9CgCv9BeM863758pt7e0e+uj1PzLqvIcf2FfnvsWQ/imPWxD0Sfe2tYADHlU/XrAfMdEvUWm3LBJijrrV5EzcahrpnaFEcSFnV4kXmDDFS4u+gmymEQ+UuNF5mzIonVw/vppPPt8rSFzFJQ3EobJ+/0eS/buFwxeQujtdD5thelQ/nQTB3ziiquctdXyXICuNcKPwT1+mqaVYZewv2Stz7KhJqeTBnAYORt4GffFL4UXj/Lbpbmncc5h7nanbPiD6jxItQDwL+9RXX0VEe1T5qOeNQeUqsVu+nfj/Q4RUGdqjf8YpA9v495Fmd3/QL8m/n3ny+sNP/fxOzB+cullT8uvT56vfDNWvV/Kj6b+93c7+e+mluZ+9PcN3M7c925s7lg7nbur3N/m/v7yj9W/rXy75X/mK6fflKd85s552flv/8DcTVqTg==</latexit>
sha1_base64="+AP2peD/OqB1KTx76Z5Ogw9GpbA=">AAAyOXicjVvLbtzIFdVMXhPlNZMA2WRDRDZmJpAFyTaSbAKM3pLVklrvh9tjsNm32bT4Equaktzo+Y1sk+/Il2SZXZBtfiC3qli8t9iUJgIssc4pXharzq17mk338zgScnn5n598+oMf/ujHP/nsp/M/+/kvfvmrz7/49bnIxkUAZ0EWZ8Vl3xcQRymcyUjGcJkX4Cf9GC76N+uKvyihEFGWnsqHHN4lfphGwyjwJULvejF2HfjvX/5leen1+88XlpeW9Y83e7BSHSzMVT/d91+s/LY3yIJxAqkMYl+ItyvLuXw38QsZBTFM53tjAbkf3PghvMXD1E9AvJvoUU+954gMvGFW4L9UehrlZ0z8RIiHpI89E1+ORJNTYBv3diyHf343idJ8LCENzIWG49iTmaemwBtEBQQyfsADPygiHKsXjPzCDyRO1Pz8/HP14x1sXnj7q6c73sbm1u7B7unu4cGJp6n5tpEs4l91H2Kxn0wxhrfvFzeewAvhPAsvG3qBn5tjdcsFDKEoojRUoxpEZSRst2EUjgvAO0rhLsiSxE8Hkx6CMQzldDLpQeJ91cHjr6fTmT4BLgQUtte6brX1K6JwVAc7Vo22XjLLbZ/TLG/r0c+kzBLbaU23ZvpV9+3bbv5jPfq2R/+xHoHtETzWY2B7DFQPXIYdvLtY3aHne9hfrToMMVkGHs5N4sbAYwVO3668wyj9obewooJglC29KGbVUFOw6MXZHRQvAky9pfkehtTTCsOFlYlZwO962JroAG2n43Aj6cdL3haKQUjMGLX2Qq0Y8ibilo241YyoaXmX2WsuvKyuKjzbycM7qhov7Rm3Y39Apyy8Wng9c9pifY49esVDvda3c2JU/eR0oPLN4KsUcOajJYCdEHP2iT37pOXsY3uWzui7rM6ypXpizNWFnpk6Bx+ZmmbAUQHQDMniLbyajUizxmK/mo3tpx7gIqiTW6YMbs09b94uffeVDf3100Gi1Mt9pOQIRCRYnBwDfaUi/Z+BxnkOhadGY4Js1oMxPZwVWPUK/45WrxHsxYsXfplFA28s1AYXDb08EyLCkmRC57GPCVjFf3RhfbUp55iPLTOlGHN61edxfTxyk1Wg9TrQ+vcGwntOQ9A7uelbTbeG6xGh4CxtQ7148ajYcHR+HGZYhEZJy30iZ0ZXd3ryRlmomTtdtaFWW0LZtLHXw5uoYz29pZw6J61+70kzk4rqldWdM/Up1AxXHT21KOb8pnq79fld93x7p/UFcNTq+NEBV4KDKFZijdUBlgXsoI6qeMM4ywpN6yPD68OqA1L9ZLLSrFmywESYTnrKPwR+PNlodij9OBrwDu/NcZFMDDWdCQlCtp+gmWl9R5ALVSlzEcVZWlW5YwyRJV7pF5GP2Wr1DdKfqMj3Ms2KBKM+6yH0bGqns2jQPjF9l+kTE7hMQMzAZQbEgMsAMUOXGRITukxIzMhlRsRELhMR88FlPhBz4zI3xMQuE0+1jIvEiwRmLJr1wYPa7MwKLnofxkJ6gyz9UnrKL6McH9TO4yyMl1SxUzd2SlfNXCYjJneZnJhbl7klpnCZghjhMoIY6TKSmLHLjIkpXaYk5s5l7oi5d5l7Yh5c5oGYjy7zcWrMok0ArO9Zvb2XVZJMTCr1hyxt6nFj/dVZYnvoNuMZx+E+wSw3yoBglhjlgGCWFSUQzFKiHBLM8qEMCWbJUI4IZplQjglmaVB+IJjlQHlDMEuAMiY4ZnBCcMJgNtF8hjOCmZjLnGCm5PKWYCbjsiCYabgUBAu+qATL9jnh0i0JZrot7whmoi3vCWaKLR8IZnItPxJstboZg/rcrT8zFi26BSO61n0ZjPJad2Yw8mvdm8FosHV3BiPE1v0ZjBpbd2gwkmzdo8HosnWXRu7RfRqMQlt3ajAybd2rwWi1uVtbLnG5hHOP7sRgpNu6F4PRb+tuDEbErfsxGCW37shg5Ny6J4PRdOuuDEbYrfsyGHW37sxgJN66N4PReevuDEbsrfszGMU/vkNjLhRRUDuUZJXyY5XSJlkjeI3B6wSvM3iD4A0GbxK8yeAtgrcYvE3wNoN3CN5h8C7Buwx+Q/AbBu8RvMfgDsEdBu8TvM/gA4IPGHxI8CGDuwR3GXxE8BGDjwk+ZvAJwScMPiX4lMFnBJ8x+JzgcwZfEHzB4EuCLxl8RfAVg68Jvn58e3VFB0Z1TKOrTL9aeoxb49y6y61zbsPlNji36XKbnNtyuS3ObbvcNud2XG6Hc7sut8u5Ny73hnN7LrfHuY7LdTi373L7nDtwuQPOHbrcIee6Ltfl3JHLHXHu2OWOOXficiecO3W5U86dudwZ585d7pxzFy53wblLl7vk3JXLXXHu2uWs7M+5hSg/gv4cgZ9dl+tzyyyFif08a7FkbKBeQkWj9sQKd/1wWcEM6RuEfIh2IYiQ+9DeAxHyHGU1EnIa2mcgQv5CuwtEyFVoT4EIeQntJBAhB6H9AyLkG7RrQITcgvYKiMRsHgxCzkD7AkRSNn8GIRegPQAiVPt15UeEKr6u94hQnddVHhHBJtwgVNPLalnYopQGofqtqzciVLV1zUaEarWu1IhQhdb1GZE2N+ra0NKP85Fab/23VmDZr8ShdWFB+qhFTyYqynxThYw5ICJLIFS4/kuwlqSSowUwICL4myARhYk6Vf8l2Aq3Em19I5MJH/9EidW2UKwBtVCoA3ZTEyVQ20KBDqmF4gyphcIcUQuHy8aKgvxALRTjDZubiRJhfecTJUDbwslks4jiy9iUTJTobAtFd0stFFzBZmqihFZP0ESJzLZwotk0o8BKaqG47qiFwrqnForqgVooqI/T6pszrLP3Btc1FnVGtVVXVkSooup6igjVUV1FEaHqqWsnIlQzdcVEhCqlrpOIUH3U1RERqoq6JiJCtVBXQkSoAur6hwjVPV31EKFqp2sdIlTjdIVDhCqbrmuIUD3T1QwRqmK6hiFCtUtXLkSoYul6hQjVKV2lEKHqpGsTIlSTdEVChCqRrkOIUP3R1QcRqjq65iBCtUZXGkSu2QpSXejzspB0R9VG3MMjNns29RXTqdK/vrkqhxV3YvJYq+gUUqG+UN6AIPYLQFGNVtUOhFc0Zk8MI/WoFNIgG0RpiMH8cawQMayPk+lEqKe8JyAfC9DP4sH3henfTzEJm09qU6G/aTR1s4qnn1JXtyaNv0wFU79csxjpX65bjDJAbliMckBuWoyyQG5ZjPJAbluMMkHuWIxyQe5ajLJBvrEY5YPcsxhlhOxYjHJC7luMskIeWIzyQh5ajDJDdi1GuSGPLEbZIY8tRvkhTyxGGSJPLUY5Is8sRlkizy1GeSIvLEaZIi8tRrkiryxG2SKvLWYcGQp5u/DzkWFD+zk3cD5uhGsMJl2E6wwmaYQbDCZ1hJsMJoGEWwwmjYTbDCaZhDsMJqWEuwwmsYRvGEx6CfcYTJIJOwwm1YT7DCbhhAcMJu2Ehwwm+YRdBpOCwiMGk4jCYwaTjsITBpOUwlMGk5rCMwaToMJz
BpOmwgsGk6zCSwaTssIrBpO4wmsGW8ePW1tl1UT9FKXPxCXWCCVtiXVCSVpig1CtrOfehv4mYyzA8z0B0sNLxzDwNhe9PgS+wuUoEt5dNo4HCGELPKG/90AvOS489QZQFmMg9dYM3OfoLfWXufaL+S26IqlTbBNK4hQ7hJI2xS6hJE3xhlBSptgjlIQpOoSSLsU+oSRLcUAoqVIcEkqiFF1CSZPiiFCSpDgmlBQpTgglQYpTQkmP4oxQkqM4J5TUKC4IJTGKS0JJi+KKUJKiuCa0fuSSou8D/RHCNw9bKhMI5AA6rvlX9nCVWijVNWqhRNephdLcoBZudpvUQhFtUQvFs00tFM0OtVAsu9RCkbyhFopjj1ooig61UAz71EIRHFALF/+QWrjoXWrhYh9RCxf5mFq4uCfUwkU9pRYu5hm1cBHPqYWLd0EtXLRLauFiXVELF+maXa9yWpXLUksGfMmkcVy4paj81S/1YRIbdNG7i+QoG0sP7Y53hyUth8I1RECOyHFD1eVlrQHdccYIgrZL0PBLoA0TNBwTaMsEDc8E2jRBwzWBtk3Q8E2gjRM0nBNo6wQN7wTaPEHDPYG2T9DwT6ANFDQcFGgLBQ0PBdpEQcNFgbZR0PBRoI0UNJwUaCsFDS8F2kxBw02BtlPQ8FOgDRU0HBVoSwUNTwXaVEHDVYG2VdDwVaCNFTScFWhrBQ1vBdpcQcNdgbZX0PBXoA0WMIeFnxSw5MhiDN44HUARP6iXlga+9L0QUiiw2qh2JFDp/bEqPa5sc9V1OsnfT3pFMtENXfhUVEjyqIiw5Dnn128g9h90udOvgaiLYH1sxLZviIx8iZ/U3Us4Pbu8Z3faNpgkG0D81I3oDvWdmNbMdapO3ac65TKKB1D17OlGPfr6DNwmZBaMfKHev/XHMtOfoKBwRth4DTY3feoxVqfMDmAATj/TbOlXIIGbju1nmqiFQD9DczvHfh77AUzrN2o6FTD1nnvVsTu97vmb07ribTYH0hHspZ1Okz2e8tre2DWTnM1xg4yLqX3u5hIFhNP6SVqTCiTdo2pFwwiKZmiRDWXi31NPCzT7YbHI9EtM5iHbbJQ8Hqu7/6ieBLjsXmfK32Da68ws4Llf0AhUoxlf4h+/wLUvMtbzZGYB1rOSaNVQAr3I4mHhJ+qB1OguK9CgCv9BeM863758pt7e0e+uj1PzLqvIcf2FfnvsWQ/imPWxD0Sfe2tYADHlU/XrAfMdEvUWm3LBJijrrV5EzcahrpnaFEcSFnV4kXmDDFS4u+gmymEQ+UuNF5mzIonVw/vppPPt8rSFzFJQ3EobJ+/0eS/buFwxeQujtdD5thelQ/nQTB3ziiquctdXyXICuNcKPwT1+mqaVYZewv2Stz7KhJqeTBnAYORt4GffFL4UXj/Lbpbmncc5h7nanbPiD6jxItQDwL+9RXX0VEe1T5qOeNQeUqsVu+nfj/Q4RUGdqjf8YpA9v495Fmd3/QL8m/n3ny+sNP/fxOzB+cullT8uvT56vfDNWvV/Kj6b+93c7+e+mluZ+9PcN3M7c925s7lg7nbur3N/m/v7yj9W/rXy75X/mK6fflKd85s552flv/8DcTVqTg==</latexit>
2 =
0.4
time series of a specific length, l ∼ U{lmin , lmax }, from the
k=3
<latexit sha1_base64="oomw4dJbq3fET7MQJEcRMzy2mjw=">AAAyL3icjVvbbtzIEdVubhvltpsAeckLEdnY3UAWLNtI8hJgdZeskTTSjK6W1+Bwaji0eDO7hyN5MPsHeU2+I18T5CXIa/4i1d1sVjVFaSPAEuuc7mJfTnXVcOhBHkdCPn/+r08+/cEPf/Tjn3z208Wf/fwXv/zV51/8+kxkkyKA0yCLs+Ji4AuIoxROZSRjuMgL8JNBDOeDmw3Fn5dQiChL+/Iuh7eJH6bRKAp8iVDv5i8v332+9Hzluf7x7l+sVhdLC9VP990Xq7+9HmbBJIFUBrEvxJvV57l8O/MLGQUxzBevJwJyP7jxQ3iDl6mfgHg702Ode08RGXqjrMB/qfQ0ynvM/ESIu2SALRNfjkWTU2Ab92YiR39+O4vSfCIhDcyNRpPYk5mnJu4NowICGd/hhR8UEY7VC8Z+4QcSl2dxcfGp+vEOt869g7X+rre5tb13uNffOzrseZpabBvJMv5V8xDLg2SOPrwDv7jxBN4IV1d42cgL/NxcqykXMIKiiNJQjWoYlZGwzUZROCkAZ5TCNMiSxE+Hs2sEYxjJ+Wx2DYn3VQevv57P77UJcCOgsK02tNXWrojCce3sRBltrWSW2zb9LG9rMcikzBLbaF1b99pV8/ZtM/+hFgPbYvBQi8C2CB5qMbQthqoFbsMuzi5WM/R8D9urXYcRhsjQw7VJXB94rcD5m9W36GUw8pZWlRP0sq03xewaagqWvTibQvEswIBbWbxGl3pZYbS0OjMb+N01WjPtoK07DjeSfrzibaMYhMSIUXsv1I4hbzxuW4/bTY+altPM3nPpRXVX4dlGHs6oMl7YHh8m/pC6LL1cenWv23Ldx1695K5e6en0jKofXQ5Uvhl8FQLOerQ4sAtievds715L7xPbS0f0NKujbKVeGHN3oVemjsEHlqbpcFwANF0yf0sv73ukVWO+X9737ace4Caozi1LBh/MnLc+rHz3lXX99eNOotTLfaTkGEQkmJ8cHX2lPP2fjiZ5DoWnRmOcbNWDMS2cHVjzCn9Ku9dw9uzZM7/MoqE3EeqAi0ZengkRYSIyrvPYxwCs/D+4sb46lHOMx5aVUozpXrV5WB8PTLJytFE72vheRzjnNAR9kpu21XJruB4RCs7S1tWzZw+KDUfnx2GGSWictMwTOTO6utGjE2Wu7s10zbpaa3Flw8beDydR+3r8SOk7nda+t9O9RUX1ymrmTH0KNcNVV49tiunfVG+37t91+9uZ1jfAUavrBwdcCQ6iWIk1VheYFrCBuqr8jeIsKzStrwyvL6sGSA2S2WozZ8kCA2E+u1b1Q+DHs81mg9KPoyFv8M5cF8nMUPN7LkHI9g6amdczglyoTJmLKM7SKsudoIss8Uq/iHyMVqtvkP5Meb6VaVYk6PXJNUJP5nY5iwbtEzNwmQExgcsExAxdZkgMuAwQM3KZETGhy4TEjF1mTEzkMhEx713mPTE3LnNDTOwy8VzLuEi8SGDEYok+vFOHndnBZe/9REhvmKVfSk/VyyjHO3XyOBvjJZXv1PWd0l0zl8mIyV0mJ+aDy3wgpnCZghjhMoIY6TKSmInLTIgpXaYkZuoyU2JuXeaWmDuXuSPmo8t8nJti0QYA5vesPt7LKkhmJpQGIxY29bgx/+oosS20zXjGcXhAMIuNMiCYBUY5JJhFRQkEs5AoRwSzeChDglkwlGOCWSSUE4JZGJTvCWYxUN4QzAKgjAmOGZwQnDCYLTRf4YxgJuYyJ5gpufxAMJNxWRDMNFwKggXfVIJl+5pw6ZYEM92WU4KZaMtbgpliyzuCmVzLjwRbrW7FoD5368+MRYtuwYiu9VwGo7zWkxmM/FrPZjAabD2dwQix9XwGo8bWExqMJFvPaDC6bD2lkXvwnAaj0NaTGoxMW89qMFptntaWS1wu4dyDJzEY6baexWD023oagxFx63kMRsmtJzIYObeeyWA03XoqgxF267kMRt2tJzMYibeezWB03no6gxF76/kMRvEPn9AYC0UU1BVKskbxsUZhk6wTvM7gDYI3GLxJ8CaDtwjeYvA2wdsM3iF4h8G7BO8yeI/gPQa/Jvg1g/cJ3mdwh+AOgw8IPmDwIcGHDD4i+IjBXYK7DD4m+JjBJwSfMLhHcI/BfYL7DD4l+JTBZwSfMfic4HMGXxB8weBLgi8ZfEXw1cPHqys6MKpjGl1j+tXSY9w65zZcboNzmy63ybktl9vi3LbLbXNux+V2OLfrcruc23O5Pc69drnXnNt3uX3OdVyuw7kDlzvg3KHLHXLuyOWOONd1uS7njl3umHMnLnfCuZ7L9TjXd7k+505d7pRzZy53xrlzlzvn3IXLXXDu0uUuOXflclb2Z7yEKD+C/hyBn12f133LLIWZ/TxrsWRioOuEkkZdEyvcrYfLCmbIwCBUh+gqBBGqPnTtgQjVHGU1Eqo0dJ2BCNUXurpAhKoKXVMgQrWEriQQoQpC1w+IUN2gqwZEqFrQtQIiMVsHg1BloOsCRFK2fgahKkDXAIhQ7teZHxHK+DrfI0J5Xmd5RARbcINQTi+rbWGbUhqE8rfO3ohQ1tY5GxHK1TpTI0IZWudnRNqqUbcMLf04H6v91n9rBZaDShxaFxakj1r0ZKKiYj8ZDFUPc0FElkCocP2XYC1JJUcLoENE8DdBIgoT1VX/JdgKtxJtPZHZjI9/psRqLRRrQBYKdcgmNVMCtRYKdEQWijMkC4U5JguHy8aKgnxPForxhq3NTImwnvlMCdBauJhsFVF8GVuSmRKdtVB0H8hCwRVspWZKaPUCzZTIrIULzZYZBVaSheKakoXCuiULRXVHFgrq47z65gzz7K3BdY5FnVFu1ZkVEcqoOp8iQnlUZ1FEKHvq3IkI5UydMRGhTKnzJCKUH3V2RISyos6JiFAu1JkQEcqAOv8hQnlPZz1EKNvpXIcI5Tid4RChzKbzGiKUz3Q2Q4SymM5hiFDu0pkLEcpYOl8hQnlKZylEKDvp3IQI5SSdkRChTKTzECKUf3T2QYSyjs45iFCu0ZkGkSu2g5QXBjwtJN1xdRBf4xVbPRv6iulU4V9ProphxfVMHGsV9SEV6gvlTQhivwAU1XhNnUB4R1PsiVGkHpVCGmTDKA3RmT+JFSJG9XUynwn1lLcH8iEHgywefp+bwe0cg7D5pDYV+ptGkzcrf/opdTU1aerLVDD1y3WLkf7lhsUoAuSmxSgG5JbFKArktsUoDuSOxSgS5K7FKBbknsUoGuRri1E8yH2LUUTIjsUoJuSBxSgq5KHFKC7kkcUoMmTXYhQb8thiFB3yxGIUH7JnMYoQ2bcYxYg8tRhFiTyzGMWJPLcYRYq8sBjFiry0GEWLvLKYqchQyDuFn48NG9rPuYHzcSNcZzDpItxgMEkj3GQwqSPcYjAJJNxmMGkk3GEwySTcZTApJdxjMIklfM1g0ku4z2CSTNhhMKkmPGAwCSc8ZDBpJzxiMMkn7DKYFBQeM5hEFJ4wmHQU9hhMUgr7DCY1hacMJkGFZwwmTYX
nDCZZhRcMJmWFlwwmcYVXDLYVPx5tVakm6qcoAyYusU4oaUtsEErSEpuEamU99Tb1NxkTAZ7vCZAe3jqGobe17A0g8BUux5HwptkkHiKEFnhCf++BteSk8NQbQFmMjtRbM3CbY22pv8y1X8xv0x1JnWKHUBKn2CWUtCn2CCVpiteEkjLFPqEkTNEhlHQpDgglWYpDQkmV4ohQEqXoEkqaFMeEkiTFCaGkSNEjlAQp+oSSHsUpoSRHcUYoqVGcE0piFBeEkhbFJaEkRXFFaP3IJcW6D/RHCN88bKmKQKAKoOMW/6o8XCMLpbpOFkp0gyyU5iZZeNhtkYUi2iYLxbNDFopmlywUyx5ZKJLXZKE49slCUXTIQjEckIUiOCQLN/+ILNz0Llm42cdk4SafkIWb2yMLN7VPFm7mKVm4iWdk4eadk4WbdkEWbtYlWbhJV+x+VaVVVVlqy4BvmTQVFx4pKn71S30YxAZd9qaRHGcT6WG5400xpeVQuAURUEXkVEPV7WWtAd3wXiEIulyCRr0EumCCRsUEumSCRs0EumiCRtUEumyCRt0EunCCRuUEunSCRu0EuniCRvUEunyCRv0EuoCCRgUFuoSCRg0FuoiCRhUFuoyCRh0FupCCRiUFupSCRi0FupiCRjUFupyCRj0FuqCCRkUFuqSCRk0FuqiCRlUFuqyCRl0FurCCRmUFurSCRm0FuriCRnUFuryCRn0FusACVmHhJwVMObKYgDdJh1DEd+qlpaEvfS+EFArMNsqOBCp9MFGpx5VtrprOZ/m72XWRzLShE5/yCkkeFRGmPKd//Qbi4E6nO/0aiLoJ5seGb/uGyNiX+EndvYXTsstbdudtg0myIcSPTUQ3qGdirHv3qRp1H2uUyygeQtXyWhv16OseeEzILBj7Qr1/609kpj9BQeGMsPEabG7a1GOsutwfwBCcdsZsaVcggYeObWdM1EKgn6G5jWM/j/0A5vUbNZ0KmHtPveraXV63/9a8znhbzYF0BHtpp9NkT+Y8tzdOzSRna9wg42Jun7u5RAHhvH6S1qQCSXNUVjSKoGi6FtlIJv4ttbRAsx0mi0y/xGQest33kscTNfuP6kmAy+535vwNpv3OvQ088wsagTKa/iX+8Qvc+yJjLXv3NmAjK4lWhhLoeRaPCj9RD6TG06zAAlX4d8J70vn2xRP19o5+d32SmndZRY77L/TbY0+uIY5ZG/tA9Km3jgkQQz5Vv+4w3iFRb7GpKtg4Za3Vi6jZJNQ5UxfFkYRl7V5k3jAD5W4a3UQ5DCN/pfEic1YksXp4P591vn0+byGzFBS32sbJqe73oo3LFZO3MFoLnW+vo3Qk75qhY15RxV3u+ipYeoBnrfBDUK+vpllV0Eu4XfE2xplQy5OpAjAYe5v42TeFL4U3yLKblUXncc5Rrk7nrPgDarwI9QDw7/WyunqsoTonTUO8anep1YrN9O8HWvRRUH31hl8M8tofYJzF2XRQgH+z+O7zpdXm/5u4f3H2YmX1jyuvjl8tfbNe/Z+KzxZ+t/D7ha8WVhf+tPDNwu5Cd+F0IVgIF/668LeFv6/+Y/Wfq/9e/Y9p+uknVZ/fLDg/q//9H8cTZqg=</latexit>
training datasets, scales them, and takes their convex combination,

x̃^TSMix_{1:l} = ∑_{i=1}^{k} λ_i x̃^{(i)}_{1:l},    (3)

where x̃^{(i)}_{1:l} denotes the i-th scaled time series. The combination weights, [λ1, . . . , λk], are sampled from a symmetric Dirichlet distribution, Dir(α). The complete pseudocode of TSMix can be found in Algorithm 1 in Appendix A. Intuitively, TSMix enhances the diversity of data by combining patterns from different time series. Figure 2 shows example augmentations generated by TSMix and illustrates how different patterns are mixed.

Figure 2: An illustration of TSMix augmentation for k = {1, 2, 3}. TSMix improves pattern diversity by taking weighted combinations of randomly-sampled time series from different datasets.
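As an illustrative sketch (not the exact Algorithm 1), the mixing step can be written as follows in Python; the mean scaling, the window slicing, and the Dirichlet concentration value alpha are simplifying assumptions made here to keep the snippet self-contained.

```python
import numpy as np

def tsmix(series_pool, max_k=3, alpha=1.5, length=256, rng=None):
    """Sketch of one TSMix augmentation: scale up to max_k randomly drawn
    windows and combine them with Dirichlet-distributed weights."""
    rng = rng or np.random.default_rng()
    k = rng.integers(1, max_k + 1)              # number of series to mix
    weights = rng.dirichlet(np.full(k, alpha))  # [lambda_1, ..., lambda_k]
    mixed = np.zeros(length)
    for lam in weights:
        x = series_pool[rng.integers(len(series_pool))]  # random series (assumed >= length)
        start = rng.integers(0, len(x) - length + 1)
        window = np.asarray(x[start:start + length], dtype=float)
        window /= np.mean(np.abs(window)) + 1e-8         # simple mean scaling
        mixed += lam * window
    return mixed
```

Note that k = 1 is drawn with nonzero probability, so unmixed (original) series also appear among the augmentations.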
While TSMix improves pattern diversity, it may still prove insufficient for training a generalist time series
model, especially when real data is limited. To further supplement the training dataset, we propose Kernel-
Synth, a method to generate synthetic time series using Gaussian processes (GPs). KernelSynth is inspired
by the Automatic Statistician (Duvenaud et al., 2013), where a compositional search over a space of GP
kernels is performed to explain the structure of a time series. We use the inverse of this process — randomly
compose GP kernels to generate new time series.
GPs are distributions over functions defined by the mean function, m(t), and the positive definite kernel,
κ(t, t′ ), where t ∈ R is the domain. The kernel specifies a covariance function which defines the joint
variability of the function values at an arbitrary pair of points, (t, t′ ), in the input domain. Diverse patterns
can be generated by appropriately selecting the kernel. We constructed a kernel bank, K, of basis kernels
defining fundamental time series patterns. These include linear kernels for trend, RBF kernels for smooth
local variation, and periodic kernels for seasonalities found in typical time series frequencies. The final kernel,
κ̃(t, t′ ), is constructed by sampling j ∼ U{1, J} kernels from K with replacement and combining these kernels
via random binary operations, + or ×. A synthetic time series is generated by drawing a sample of length
lsyn from the GP prior, GP(m(t) = 0, κ̃(t, t′ )); see Algorithm 2 in Appendix A for details. Figure 3 depicts
this generative process used in KernelSynth, illustrating how time series with intricate patterns can arise
from the composition of simple basis kernels.
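The generative process can be sketched with scikit-learn kernel objects as below; the particular kernel bank, hyperparameter ranges, and maximum number of sampled kernels are illustrative assumptions rather than the exact configuration of Algorithm 2.

```python
import numpy as np
from sklearn.gaussian_process.kernels import RBF, DotProduct, ExpSineSquared

def kernel_synth(length=1024, max_kernels=5, rng=None):
    """Sketch of KernelSynth: sample kernels from a bank with replacement,
    compose them via random + or * operations, and draw one sample from the
    zero-mean GP prior defined by the composite kernel."""
    rng = rng or np.random.default_rng()
    bank = [
        DotProduct(sigma_0=0.0),                              # linear kernel (trend)
        RBF(length_scale=rng.uniform(0.05, 1.0)),             # smooth local variation
        ExpSineSquared(periodicity=rng.uniform(0.05, 0.5)),   # periodic kernel (seasonality)
    ]
    j = rng.integers(1, max_kernels + 1)
    kernels = [bank[rng.integers(len(bank))] for _ in range(j)]  # sample with replacement
    kernel = kernels[0]
    for k in kernels[1:]:
        kernel = kernel + k if rng.random() < 0.5 else kernel * k
    t = np.linspace(0, 1, length)[:, None]       # input points in the unit interval
    cov = kernel(t) + 1e-6 * np.eye(length)      # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(length), cov)
```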
5 Experiments
In this section, we present empirical results on commonly used benchmark datasets. First, we give an
overview of the datasets, training strategy, baselines, and evaluation metrics (Sections 5.1–5.4). Table 1
provides a high-level summary of the datasets and baselines used in our experiments. We then (a) evaluate
the performance of Chronos models in the in-domain and zero-shot settings against local models and
task-specific deep learning models (Section 5.5); (b) analyze the effect of various design choices such as
model size, initialization, synthetic data proportion, context length, and vocabulary size on the performance
of Chronos models (Section 5.6); and (c) analyze the qualitative performance of Chronos models and
highlight their limitations (Section 5.7). We discuss our key findings in this section and relegate specific
experiment details to the appendices.

Figure 3: (a) An illustration of KernelSynth, a Gaussian process (GP)-based synthetic time series generation method. Kernels are sampled from a kernel bank and then randomly combined using a binary operator (× or +). The resultant kernel is used in a GP prior to generate synthetic time series. Random samples from kernels at each step are shown in red and blue colors. (b) Example synthetic time series generated by KernelSynth.
Table 1: A high-level summary of the datasets and baselines used in our experiments.
5.1 Datasets
To train and evaluate Chronos models, we collected a wide variety of publicly available datasets spanning
various application domains including energy, transport, healthcare, retail, web, weather, finance, and with
sampling frequencies ranging from 5 minutes up to yearly. The complete list of datasets, together with their
respective sources and additional details, is given in Appendix B. In total, our dataset collection comprises
55 datasets from multiple sources, including the Monash Time Series Forecasting Repository (Godahewa
et al., 2021), the M-competitions (Makridakis et al., 1979; Makridakis & Hibon, 2000; Makridakis et al.,
2020; 2022), and public domain datasets from Kaggle.
We categorize this collection into three subsets, based on how we use them for training and evaluating
Chronos models: (a) datasets exclusively used for training (13 datasets); (b) Benchmark I datasets, em-
ployed for both training and evaluation, representing an in-domain evaluation (15 datasets); and (c) Bench-
mark II datasets, used solely for evaluation, constituting a zero-shot evaluation (27 datasets). In categorizing
the datasets in this way, we tried to strike a balance between reserving as many of the datasets most commonly
used in the literature as possible for the zero-shot evaluation of Chronos models, and retaining enough
variety of domains and sampling frequencies in the training data. Overall, we used 28
datasets for training Chronos models, consisting of about 890K univariate time series with approximately
84B observations (tokens) in total. For both in-domain (I) and zero-shot (II) benchmark datasets, we used
the last H ∈ N+ observations of each time series as a held-out test set: all models are judged by the accuracy
of their forecasts on this held-out set, which no model had access to during training. The prediction
length H is task-specific (see Table 2 in Appendix B), where we define a task as a dataset and prediction
length pair. Tasks in both benchmarks exhibit diverse properties, in terms of the dataset size, frequency,
history length, and prediction length, making them rich benchmarks reflective of real world scenarios.
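As a small illustration, the held-out split described above simply reserves the last H points of each series (a minimal sketch; H is task-specific):

```python
def split_last_h(series, H):
    """Hold out the last H observations of a univariate series as the test window."""
    return series[:-H], series[-H:]
```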
5.2 Training Corpus and Protocols
We selected T5 (Raffel et al., 2020) as the main architecture for Chronos in our experiments, since it is
available in a variety of sizes, ranging from 16M (Tiny) to 11B (XXL) parameters (Tay et al., 2021). We
also conducted experiments with the decoder-only GPT-2 model to demonstrate the applicability of the
Chronos framework to decoder-only models. In the following, we discuss the training configurations used
for our main results (Section 5.5) and explore alternatives for some of the hyperparameters in Section 5.6.
We trained T5 models of 4 sizes,¹ namely, Mini (20M), Small (46M), Base (200M), and Large (710M), and the
GPT-2 base model (90M), on 10M TSMix augmentations (see Section 4.1) generated from the 28 training
datasets, with K = 3 in Algorithm 1, and 1M synthetic time series generated using Gaussian processes
(see Section 4.2). Note that with this setup, original time series are adequately represented since they are
included in the TSMix augmentations with probability 1/3. We sampled time series from the augmentations
and synthetic data in the ratio 9:1 during training. Each model is trained with an effective batch size of
256 sequences, using distributed data parallelism and gradient accumulation, whenever necessary. These
sequences are constructed by slicing random windows from the time series, and then scaling and quantizing
them into equal-sized bins within the interval [l = −15, r = 15], as described in Section 3.1. The context
length of the sequences was set to 512, the default for T5 models, and the prediction length is set to 64, a
value greater than the prediction lengths of all tasks we consider in our evaluation.
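The sequence construction just described can be sketched as follows; the number of bins and the scaling used here are illustrative placeholders for the tokenizer of Section 3.1, not its exact definition.

```python
import numpy as np

def make_training_example(series, context_len=512, pred_len=64,
                          low=-15.0, high=15.0, n_bins=4096, rng=None):
    """Slice a random window, scale it, and quantize values into equal-sized
    bins within [low, high]; returns (context tokens, target tokens)."""
    rng = rng or np.random.default_rng()
    total = context_len + pred_len
    start = rng.integers(0, len(series) - total + 1)          # assumes len(series) >= total
    window = np.asarray(series[start:start + total], dtype=float)
    window /= np.mean(np.abs(window[:context_len])) + 1e-8    # scale computed on the context
    edges = np.linspace(low, high, n_bins + 1)
    tokens = np.clip(np.digitize(window, edges) - 1, 0, n_bins - 1)
    return tokens[:context_len], tokens[context_len:]
```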
The models were optimized for 200K steps using the AdamW optimizer with a weight decay of 0.01. The
learning rate was annealed linearly from its initial value of 0.001 to 0 over the training steps. The other model
and training hyperparameters were set to their defaults used in the transformers library (Wolf et al., 2020).
We used an AWS EC2 instance with 8 A100 (40GB) GPUs to train all Chronos models, and we employed
faster floating point formats (TF32) and model compilation to speed up training. Table 5 in Appendix E
reports the training time and the approximate cost of training Chronos models of different sizes.
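A minimal sketch of this optimization setup using PyTorch and the transformers library is given below; the tiny T5 configuration is only a stand-in to keep the snippet self-contained, and the data pipeline is omitted.

```python
import torch
from transformers import T5Config, T5ForConditionalGeneration, get_linear_schedule_with_warmup

# Stand-in model; the actual Chronos models use T5 Mini/Small/Base/Large configurations.
model = T5ForConditionalGeneration(
    T5Config(vocab_size=4096, d_model=256, d_ff=512,
             num_layers=4, num_decoder_layers=4, num_heads=4)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=200_000
)  # anneals the learning rate linearly from 1e-3 to 0 over training
```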
5.3 Baselines
We assessed the performance of Chronos models against a variety of time series forecasting baselines.
From statistical forecasting literature (Hyndman & Athanasopoulos, 2018), we included Naive, Seasonal
Naive, AutoETS, AutoARIMA (Hyndman et al., 2008) and AutoTheta (Assimakopoulos & Nikolopoulos,
2000). Additionally, we compared against several neural forecasting baselines, including WaveNet (Oord
et al., 2016), DeepAR (Salinas et al., 2020), N-BEATS (Oreshkin et al., 2020), TFT (Lim et al., 2021),
DLinear (Zeng et al., 2023), PatchTST (Nie et al., 2023), N-HiTS (Challu et al., 2023), and GPT4TS (Zhou
et al., 2023a). On Benchmark II (i.e., zero-shot datasets for Chronos models), we also evaluated against
ForecastPFN (Dooley et al., 2023) which is a pretrained transformer model trained only on synthetic time
series data.
We categorize Chronos models and the baselines into three groups: local models that estimate parameters
for each time series individually; task-specific models trained or fine-tuned for each task separately; and
pretrained models which do not perform task-specific training, instead using a single model across all tasks.
Further details on the implementation and training of these baselines can be found in Appendix C.
5.4 Evaluation Metrics
Whenever possible,² we evaluated models in terms of both their probabilistic and point forecast performance.
We used the weighted quantile loss (WQL) to assess the quality of the probabilistic forecasts: the WQL is
related to the continuous ranked probability score (CRPS, Gneiting & Raftery (2007))3 and is commonly
¹ Our inference code and model checkpoints are available at https://github.com/amazon-science/chronos-forecasting.
² Some models (GPT4TS and ForecastPFN) only generate point forecasts, so we evaluate only those.
³ Many existing works (Ansari et al., 2021; Rasul et al., 2023; Kollovieh et al., 2023) use CRPS and WQL synonymously.
used to evaluate probabilistic forecasts (Gasthaus et al., 2019; Shchur et al., 2023). The WQL measures
the compatibility between the predictive distribution and the ground-truth observation at a uniformly-
spaced grid of quantile levels; we compute the WQL on 9 uniformly-spaced quantile levels {0.1, 0.2, . . . , 0.9}.
Quantile forecasters such as TFT were directly trained on these quantile levels. For methods requiring
sampling, we estimated the quantiles using 20 sample forecast paths. We used the mean absolute scaled
error (MASE, Hyndman & Koehler (2006)) to evaluate the point forecast performance. The MASE is defined
as the absolute error of the forecast scaled by the historical seasonal error of the time series, and was selected
due to its favorable properties over other point forecasting metrics (Hyndman & Koehler, 2006). We used the
median forecast (0.5-quantile) for computing the MASE for the probabilistic forecasters. See Appendix D
for a detailed discussion on the evaluation metrics.
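For reference, simplified implementations of both metrics are sketched below; the exact definitions and normalizations follow Appendix D, so these should be read as illustrative.

```python
import numpy as np

def wql(y_true, forecast_samples, levels=np.linspace(0.1, 0.9, 9)):
    """Weighted quantile loss over a grid of quantile levels, with quantiles
    estimated from sample forecast paths of shape [num_samples, horizon]."""
    y_true = np.asarray(y_true, dtype=float)
    q_hat = np.quantile(forecast_samples, levels, axis=0)  # [num_levels, horizon]
    losses = [
        np.sum(np.maximum(q * (y_true - pred), (q - 1) * (y_true - pred)))
        for q, pred in zip(levels, q_hat)
    ]
    return 2 * np.mean(losses) / np.sum(np.abs(y_true))

def mase(y_true, y_pred, history, m=1):
    """Mean absolute scaled error: forecast error scaled by the in-sample
    seasonal naive error with season length m."""
    history = np.asarray(history, dtype=float)
    scale = np.mean(np.abs(history[m:] - history[:-m]))
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / scale
```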
Since the magnitude of the evaluation metrics can vary across datasets, we adopt a different approach to
aggregate scores than naive averaging. For each dataset, we compute the relative score of each model as
the model’s score divided by the score of a baseline model (here, Seasonal Naive). The relative scores are
aggregated across all datasets using the geometric mean. The choice of the geometric mean is deliberate —
Fleming & Wallace (1986) show that the arithmetic mean can yield misleading conclusions in this context,
and the geometric mean is provably the only meaningful way to aggregate such relative scores. Furthermore,
the geometric mean is also not sensitive to the choice of the baseline, and the model ordering stays intact
if another baseline is selected instead. We used Seasonal Naive due to its simplicity and popularity as a
forecasting baseline. For models that failed or could not finish evaluation within the allotted time on certain
datasets, we use a relative score of 1, i.e., the baseline relative score, when aggregating the results. We
assign equal weights to all tasks during aggregation, reflecting real-world scenarios where datasets may have
different numbers of time series, frequencies, history and prediction lengths.
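This aggregation can be sketched as follows, with per-dataset metric values held in plain dictionaries (names are hypothetical) and a relative score of 1 substituted for failed runs.

```python
import numpy as np

def aggregate_relative_scores(model_scores, baseline_scores):
    """Geometric mean of per-dataset scores relative to a baseline (e.g., Seasonal Naive).
    Datasets missing from model_scores (failed or timed-out runs) count as relative score 1."""
    rel = [
        model_scores[d] / baseline_scores[d] if d in model_scores else 1.0
        for d in baseline_scores
    ]
    return float(np.exp(np.mean(np.log(rel))))
```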
5.5 Main Results
In this section, we present our main results on 42 datasets, which comprise Benchmark I (15 datasets)
and Benchmark II (27 datasets). Chronos models surpass both classical statistical baselines and task-
specific deep learning models on the in-domain datasets (Benchmark I; see Section 5.5.1). On the zero-shot
datasets (Benchmark II; Section 5.5.2), Chronos models comfortably outperform statistical baselines, while
performing on par with the best deep learning models trained on these tasks. With an inexpensive fine-tuning
regimen, our Chronos-T5 (Small) model achieves the top spot on Benchmark II, significantly outperforming
all baselines.
5.5.1 Benchmark I: In-Domain Results
Benchmark I comprises 15 datasets that were also part of the training data of Chronos models, i.e., this
benchmark evaluates the in-domain performance of Chronos models (see Table 2). Figure 4 summarizes
the probabilistic and point forecasting performance for all models on the held-out test windows, in terms
of their aggregated relative scores, computed as described in Section 5.4. The bigger Chronos-T5 models
(Base and Large) significantly outperform baseline models, obtaining the best aggregated relative scores and
average ranks (Figure 18 in Appendix E). These models not only perform better than local models (e.g.,
AutoETS and AutoARIMA), but they also perform better than task-specific deep learning models trained
or fine-tuned for each dataset (e.g., PatchTST and DeepAR).
The smaller Chronos-T5 models (Mini and Small) and Chronos-GPT2 also perform better than the
majority of baselines, with the exception of PatchTST. Task-specific deep learning models, trained across
multiple time series for a specific task, perform better than local statistical models that fit parameters for
each time series. Interestingly, the Seasonal Naive baseline performs competitively against other local models
on this benchmark, suggesting that the datasets in this benchmark exhibit strong seasonal patterns. This
is unsurprising since a majority of these datasets belong to domains such as energy and transport that tend
to be highly seasonal in nature. The raw WQL and MASE values for individual datasets summarized in
Figure 4 can be found in Tables 6 and 7 in Appendix E.
These results demonstrate the benefit of using models that are trained only once across multiple datasets, over
task-specific models trained individually for each task. Such models could streamline production forecasting